2025-11-23 00:00:07.062937 | Job console starting
2025-11-23 00:00:07.098909 | Updating git repos
2025-11-23 00:00:07.227761 | Cloning repos into workspace
2025-11-23 00:00:07.462092 | Restoring repo states
2025-11-23 00:00:07.480109 | Merging changes
2025-11-23 00:00:07.480131 | Checking out repos
2025-11-23 00:00:07.892682 | Preparing playbooks
2025-11-23 00:00:08.662623 | Running Ansible setup
2025-11-23 00:00:14.460115 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-11-23 00:00:16.209424 |
2025-11-23 00:00:16.209565 | PLAY [Base pre]
2025-11-23 00:00:16.283605 |
2025-11-23 00:00:16.283741 | TASK [Setup log path fact]
2025-11-23 00:00:16.354861 | orchestrator | ok
2025-11-23 00:00:16.412972 |
2025-11-23 00:00:16.413126 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-23 00:00:16.501833 | orchestrator | ok
2025-11-23 00:00:16.543818 |
2025-11-23 00:00:16.543969 | TASK [emit-job-header : Print job information]
2025-11-23 00:00:16.680642 | # Job Information
2025-11-23 00:00:16.680845 | Ansible Version: 2.16.14
2025-11-23 00:00:16.680882 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-11-23 00:00:16.680916 | Pipeline: periodic-midnight
2025-11-23 00:00:16.680939 | Executor: 521e9411259a
2025-11-23 00:00:16.680959 | Triggered by: https://github.com/osism/testbed
2025-11-23 00:00:16.680981 | Event ID: 126471c5f3cd46e89f14d76c4d5eef65
2025-11-23 00:00:16.706968 |
2025-11-23 00:00:16.707191 | LOOP [emit-job-header : Print node information]
2025-11-23 00:00:17.138782 | orchestrator | ok:
2025-11-23 00:00:17.139008 | orchestrator | # Node Information
2025-11-23 00:00:17.139044 | orchestrator | Inventory Hostname: orchestrator
2025-11-23 00:00:17.139069 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-11-23 00:00:17.139091 | orchestrator | Username: zuul-testbed06
2025-11-23 00:00:17.139112 | orchestrator | Distro: Debian 12.12
2025-11-23 00:00:17.139136 | orchestrator | Provider: static-testbed
2025-11-23 00:00:17.139156 | orchestrator | Region:
2025-11-23 00:00:17.139178 | orchestrator | Label: testbed-orchestrator
2025-11-23 00:00:17.139198 | orchestrator | Product Name: OpenStack Nova
2025-11-23 00:00:17.139217 | orchestrator | Interface IP: 81.163.193.140
2025-11-23 00:00:17.167890 |
2025-11-23 00:00:17.168051 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-11-23 00:00:18.526443 | orchestrator -> localhost | changed
2025-11-23 00:00:18.539363 |
2025-11-23 00:00:18.541768 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-11-23 00:00:22.360360 | orchestrator -> localhost | changed
2025-11-23 00:00:22.376723 |
2025-11-23 00:00:22.376831 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-11-23 00:00:23.041693 | orchestrator -> localhost | ok
2025-11-23 00:00:23.048940 |
2025-11-23 00:00:23.049054 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-11-23 00:00:23.087695 | orchestrator | ok
2025-11-23 00:00:23.145734 | orchestrator | included: /var/lib/zuul/builds/14aeca2d96864489b1e086b610ab7ca4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-11-23 00:00:23.177234 |
2025-11-23 00:00:23.177372 | TASK [add-build-sshkey : Create Temp SSH key]
2025-11-23 00:00:26.989907 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-11-23 00:00:26.990069 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/14aeca2d96864489b1e086b610ab7ca4/work/14aeca2d96864489b1e086b610ab7ca4_id_rsa
2025-11-23 00:00:26.990099 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/14aeca2d96864489b1e086b610ab7ca4/work/14aeca2d96864489b1e086b610ab7ca4_id_rsa.pub
2025-11-23 00:00:26.990120 | orchestrator -> localhost | The key fingerprint is:
2025-11-23 00:00:26.990140 | orchestrator -> localhost | SHA256:dAj7sYShGea96v0e2JA+FWHZ5uv3KVvUdqIksdl6PAw zuul-build-sshkey
2025-11-23 00:00:26.990158 | orchestrator -> localhost | The key's randomart image is:
2025-11-23 00:00:26.990184 | orchestrator -> localhost | +---[RSA 3072]----+
2025-11-23 00:00:26.990202 | orchestrator -> localhost | | o o oo |
2025-11-23 00:00:26.990219 | orchestrator -> localhost | | o = *.oo |
2025-11-23 00:00:26.990237 | orchestrator -> localhost | | + + *oo |
2025-11-23 00:00:26.990254 | orchestrator -> localhost | | * =.= . |
2025-11-23 00:00:26.990270 | orchestrator -> localhost | | + S E.o o +|
2025-11-23 00:00:26.990293 | orchestrator -> localhost | | o = .B o o.|
2025-11-23 00:00:26.990311 | orchestrator -> localhost | | . + o.. * . |
2025-11-23 00:00:26.990329 | orchestrator -> localhost | | . . . ...oo . |
2025-11-23 00:00:26.990370 | orchestrator -> localhost | | . .oo ..+o |
2025-11-23 00:00:26.990389 | orchestrator -> localhost | +----[SHA256]-----+
2025-11-23 00:00:26.990438 | orchestrator -> localhost | ok: Runtime: 0:00:02.520287
2025-11-23 00:00:26.996566 |
2025-11-23 00:00:26.996658 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-11-23 00:00:27.034814 | orchestrator | ok
2025-11-23 00:00:27.062315 | orchestrator | included: /var/lib/zuul/builds/14aeca2d96864489b1e086b610ab7ca4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-11-23 00:00:27.081445 |
2025-11-23 00:00:27.081545 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-11-23 00:00:27.124467 | orchestrator | skipping: Conditional result was False
2025-11-23 00:00:27.131644 |
2025-11-23 00:00:27.131741 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-11-23 00:00:27.855093 | orchestrator | changed
2025-11-23 00:00:27.860100 |
2025-11-23 00:00:27.860185 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-11-23 00:00:28.190090 | orchestrator | ok
2025-11-23 00:00:28.200522 |
2025-11-23 00:00:28.200622 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-11-23 00:00:28.701057 | orchestrator | ok
2025-11-23 00:00:28.707686 |
2025-11-23 00:00:28.707780 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-11-23 00:00:29.238269 | orchestrator | ok
2025-11-23 00:00:29.243305 |
2025-11-23 00:00:29.243396 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-11-23 00:00:29.282084 | orchestrator | skipping: Conditional result was False
2025-11-23 00:00:29.296949 |
2025-11-23 00:00:29.297043 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-11-23 00:00:30.245529 | orchestrator -> localhost | changed
2025-11-23 00:00:30.268848 |
2025-11-23 00:00:30.268945 | TASK [add-build-sshkey : Add back temp key]
2025-11-23 00:00:31.182005 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/14aeca2d96864489b1e086b610ab7ca4/work/14aeca2d96864489b1e086b610ab7ca4_id_rsa (zuul-build-sshkey)
2025-11-23 00:00:31.182217 | orchestrator -> localhost | ok: Runtime: 0:00:00.023012
2025-11-23 00:00:31.195757 |
2025-11-23 00:00:31.195871 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-11-23 00:00:31.982892 | orchestrator | ok
2025-11-23 00:00:31.992399 |
2025-11-23 00:00:31.992521 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-11-23 00:00:32.053093 | orchestrator | skipping: Conditional result was False
2025-11-23 00:00:32.187885 |
2025-11-23 00:00:32.187999 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-11-23 00:00:32.858045 | orchestrator | ok
2025-11-23 00:00:32.891982 |
2025-11-23 00:00:32.892154 | TASK [validate-host : Define zuul_info_dir fact]
2025-11-23 00:00:32.951793 | orchestrator | ok
2025-11-23 00:00:32.965993 |
2025-11-23 00:00:32.966105 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-11-23 00:00:33.633951 | orchestrator -> localhost | ok
2025-11-23 00:00:33.641493 |
2025-11-23 00:00:33.641601 | TASK [validate-host : Collect information about the host]
2025-11-23 00:00:35.304263 | orchestrator | ok
2025-11-23 00:00:35.346601 |
2025-11-23 00:00:35.346726 | TASK [validate-host : Sanitize hostname]
2025-11-23 00:00:35.444030 | orchestrator | ok
2025-11-23 00:00:35.449739 |
2025-11-23 00:00:35.449845 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-11-23 00:00:36.856409 | orchestrator -> localhost | changed
2025-11-23 00:00:36.870417 |
2025-11-23 00:00:36.870739 | TASK [validate-host : Collect information about zuul worker]
2025-11-23 00:00:37.383819 | orchestrator | ok
2025-11-23 00:00:37.396970 |
2025-11-23 00:00:37.397084 | TASK [validate-host : Write out all zuul information for each host]
2025-11-23 00:00:38.811235 | orchestrator -> localhost | changed
2025-11-23 00:00:38.821555 |
2025-11-23 00:00:38.821656 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-11-23 00:00:39.154534 | orchestrator | ok
2025-11-23 00:00:39.166162 |
2025-11-23 00:00:39.166270 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-11-23 00:01:18.851000 | orchestrator | changed:
2025-11-23 00:01:18.851232 | orchestrator | .d..t...... src/
2025-11-23 00:01:18.851268 | orchestrator | .d..t...... src/github.com/
2025-11-23 00:01:18.851292 | orchestrator | .d..t...... src/github.com/osism/
2025-11-23 00:01:18.851313 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-11-23 00:01:18.851468 | orchestrator | RedHat.yml
2025-11-23 00:01:18.866001 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-11-23 00:01:18.866019 | orchestrator | RedHat.yml
2025-11-23 00:01:18.866072 | orchestrator | = 2.2.0"...
2025-11-23 00:01:32.655068 | orchestrator | 00:01:32.654 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-11-23 00:01:32.673702 | orchestrator | 00:01:32.673 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-11-23 00:01:32.826704 | orchestrator | 00:01:32.826 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.4.0...
2025-11-23 00:01:33.685209 | orchestrator | 00:01:33.684 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2025-11-23 00:01:33.751689 | orchestrator | 00:01:33.751 STDOUT terraform: - Installing hashicorp/local v2.6.1...
2025-11-23 00:01:34.359132 | orchestrator | 00:01:34.358 STDOUT terraform: - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2025-11-23 00:01:34.438150 | orchestrator | 00:01:34.437 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-11-23 00:01:35.019112 | orchestrator | 00:01:35.018 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-11-23 00:01:35.019207 | orchestrator | 00:01:35.019 STDOUT terraform: Providers are signed by their developers.
2025-11-23 00:01:35.019219 | orchestrator | 00:01:35.019 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-11-23 00:01:35.019227 | orchestrator | 00:01:35.019 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-11-23 00:01:35.020453 | orchestrator | 00:01:35.019 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-11-23 00:01:35.020491 | orchestrator | 00:01:35.019 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-11-23 00:01:35.020499 | orchestrator | 00:01:35.019 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-11-23 00:01:35.020508 | orchestrator | 00:01:35.019 STDOUT terraform: you run "tofu init" in the future.
2025-11-23 00:01:35.020516 | orchestrator | 00:01:35.019 STDOUT terraform: OpenTofu has been successfully initialized!
2025-11-23 00:01:35.020523 | orchestrator | 00:01:35.019 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-11-23 00:01:35.020530 | orchestrator | 00:01:35.019 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-11-23 00:01:35.020543 | orchestrator | 00:01:35.020 STDOUT terraform: should now work.
2025-11-23 00:01:35.020550 | orchestrator | 00:01:35.020 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-11-23 00:01:35.020557 | orchestrator | 00:01:35.020 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-11-23 00:01:35.020565 | orchestrator | 00:01:35.020 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-11-23 00:01:35.348782 | orchestrator | 00:01:35.348 STDOUT terraform: Created and switched to workspace "ci"!
2025-11-23 00:01:35.348866 | orchestrator | 00:01:35.348 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-11-23 00:01:35.348883 | orchestrator | 00:01:35.348 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-11-23 00:01:35.348889 | orchestrator | 00:01:35.348 STDOUT terraform: for this configuration.
2025-11-23 00:01:35.679459 | orchestrator | 00:01:35.679 STDOUT terraform: ci.auto.tfvars
2025-11-23 00:01:35.682543 | orchestrator | 00:01:35.682 STDOUT terraform: default_custom.tf
2025-11-23 00:01:36.743822 | orchestrator | 00:01:36.743 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-11-23 00:01:37.261865 | orchestrator | 00:01:37.261 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-11-23 00:01:37.628463 | orchestrator | 00:01:37.628 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-11-23 00:01:37.628565 | orchestrator | 00:01:37.628 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-11-23 00:01:37.628581 | orchestrator | 00:01:37.628 STDOUT terraform:  + create
2025-11-23 00:01:37.628605 | orchestrator | 00:01:37.628 STDOUT terraform:  <= read (data resources)
2025-11-23 00:01:37.628619 | orchestrator | 00:01:37.628 STDOUT terraform: OpenTofu will perform the following actions:
2025-11-23 00:01:37.628664 | orchestrator | 00:01:37.628 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-11-23 00:01:37.628933 | orchestrator | 00:01:37.628 STDOUT terraform:  # (config refers to values not yet known)
2025-11-23 00:01:37.628967 | orchestrator | 00:01:37.628 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-11-23 00:01:37.628987 | orchestrator | 00:01:37.628 STDOUT terraform:  + checksum = (known after apply)
2025-11-23 00:01:37.629105 | orchestrator | 00:01:37.628 STDOUT terraform:  + created_at = (known after apply)
2025-11-23 00:01:37.629119 | orchestrator | 00:01:37.629 STDOUT terraform:  + file = (known after apply)
2025-11-23 00:01:37.629137 | orchestrator | 00:01:37.629 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.629323 | orchestrator | 00:01:37.629 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.629391 | orchestrator | 00:01:37.629 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-11-23 00:01:37.629410 | orchestrator | 00:01:37.629 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-11-23 00:01:37.629640 | orchestrator | 00:01:37.629 STDOUT terraform:  + most_recent = true
2025-11-23 00:01:37.629661 | orchestrator | 00:01:37.629 STDOUT terraform:  + name = (known after apply)
2025-11-23 00:01:37.629668 | orchestrator | 00:01:37.629 STDOUT terraform:  + protected = (known after apply)
2025-11-23 00:01:37.629673 | orchestrator | 00:01:37.629 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.629680 | orchestrator | 00:01:37.629 STDOUT terraform:  + schema = (known after apply)
2025-11-23 00:01:37.629689 | orchestrator | 00:01:37.629 STDOUT terraform:  + size_bytes = (known after apply)
2025-11-23 00:01:37.629696 | orchestrator | 00:01:37.629 STDOUT terraform:  + tags = (known after apply)
2025-11-23 00:01:37.629765 | orchestrator | 00:01:37.629 STDOUT terraform:  + updated_at = (known after apply)
2025-11-23 00:01:37.629809 | orchestrator | 00:01:37.629 STDOUT terraform:  }
2025-11-23 00:01:37.629887 | orchestrator | 00:01:37.629 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-11-23 00:01:37.629900 | orchestrator | 00:01:37.629 STDOUT terraform:  # (config refers to values not yet known)
2025-11-23 00:01:37.630001 | orchestrator | 00:01:37.629 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-11-23 00:01:37.630046 | orchestrator | 00:01:37.629 STDOUT terraform:  + checksum = (known after apply)
2025-11-23 00:01:37.630266 | orchestrator | 00:01:37.629 STDOUT terraform:  + created_at = (known after apply)
2025-11-23 00:01:37.630285 | orchestrator | 00:01:37.630 STDOUT terraform:  + file = (known after apply)
2025-11-23 00:01:37.630301 | orchestrator | 00:01:37.630 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.630311 | orchestrator | 00:01:37.630 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.630322 | orchestrator | 00:01:37.630 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-11-23 00:01:37.630329 | orchestrator | 00:01:37.630 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-11-23 00:01:37.630343 | orchestrator | 00:01:37.630 STDOUT terraform:  + most_recent = true
2025-11-23 00:01:37.630407 | orchestrator | 00:01:37.630 STDOUT terraform:  + name = (known after apply)
2025-11-23 00:01:37.630426 | orchestrator | 00:01:37.630 STDOUT terraform:  + protected = (known after apply)
2025-11-23 00:01:37.630488 | orchestrator | 00:01:37.630 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.630543 | orchestrator | 00:01:37.630 STDOUT terraform:  + schema = (known after apply)
2025-11-23 00:01:37.630670 | orchestrator | 00:01:37.630 STDOUT terraform:  + size_bytes = (known after apply)
2025-11-23 00:01:37.630683 | orchestrator | 00:01:37.630 STDOUT terraform:  + tags = (known after apply)
2025-11-23 00:01:37.630700 | orchestrator | 00:01:37.630 STDOUT terraform:  + updated_at = (known after apply)
2025-11-23 00:01:37.630713 | orchestrator | 00:01:37.630 STDOUT terraform:  }
2025-11-23 00:01:37.630813 | orchestrator | 00:01:37.630 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-11-23 00:01:37.630827 | orchestrator | 00:01:37.630 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-11-23 00:01:37.630926 | orchestrator | 00:01:37.630 STDOUT terraform:  + content = (known after apply)
2025-11-23 00:01:37.630954 | orchestrator | 00:01:37.630 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-11-23 00:01:37.630970 | orchestrator | 00:01:37.630 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-11-23 00:01:37.631047 | orchestrator | 00:01:37.630 STDOUT terraform:  + content_md5 = (known after apply)
2025-11-23 00:01:37.631093 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_sha1 = (known after apply)
2025-11-23 00:01:37.631207 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_sha256 = (known after apply)
2025-11-23 00:01:37.631295 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_sha512 = (known after apply)
2025-11-23 00:01:37.631307 | orchestrator | 00:01:37.631 STDOUT terraform:  + directory_permission = "0777"
2025-11-23 00:01:37.631329 | orchestrator | 00:01:37.631 STDOUT terraform:  + file_permission = "0644"
2025-11-23 00:01:37.631380 | orchestrator | 00:01:37.631 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-11-23 00:01:37.631444 | orchestrator | 00:01:37.631 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.631454 | orchestrator | 00:01:37.631 STDOUT terraform:  }
2025-11-23 00:01:37.631511 | orchestrator | 00:01:37.631 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-11-23 00:01:37.631555 | orchestrator | 00:01:37.631 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-11-23 00:01:37.631605 | orchestrator | 00:01:37.631 STDOUT terraform:  + content = (known after apply)
2025-11-23 00:01:37.631727 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-11-23 00:01:37.631737 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-11-23 00:01:37.631783 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_md5 = (known after apply)
2025-11-23 00:01:37.631866 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_sha1 = (known after apply)
2025-11-23 00:01:37.631922 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_sha256 = (known after apply)
2025-11-23 00:01:37.631974 | orchestrator | 00:01:37.631 STDOUT terraform:  + content_sha512 = (known after apply)
2025-11-23 00:01:37.631997 | orchestrator | 00:01:37.631 STDOUT terraform:  + directory_permission = "0777"
2025-11-23 00:01:37.632105 | orchestrator | 00:01:37.631 STDOUT terraform:  + file_permission = "0644"
2025-11-23 00:01:37.632115 | orchestrator | 00:01:37.632 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-11-23 00:01:37.632163 | orchestrator | 00:01:37.632 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.632174 | orchestrator | 00:01:37.632 STDOUT terraform:  }
2025-11-23 00:01:37.632237 | orchestrator | 00:01:37.632 STDOUT terraform:  # local_file.inventory will be created
2025-11-23 00:01:37.632290 | orchestrator | 00:01:37.632 STDOUT terraform:  + resource "local_file" "inventory" {
2025-11-23 00:01:37.632481 | orchestrator | 00:01:37.632 STDOUT terraform:  + content = (known after apply)
2025-11-23 00:01:37.632492 | orchestrator | 00:01:37.632 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-11-23 00:01:37.632597 | orchestrator | 00:01:37.632 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-11-23 00:01:37.632651 | orchestrator | 00:01:37.632 STDOUT terraform:  + content_md5 = (known after apply)
2025-11-23 00:01:37.632725 | orchestrator | 00:01:37.632 STDOUT terraform:  + content_sha1 = (known after apply)
2025-11-23 00:01:37.632771 | orchestrator | 00:01:37.632 STDOUT terraform:  + content_sha256 = (known after apply)
2025-11-23 00:01:37.632823 | orchestrator | 00:01:37.632 STDOUT terraform:  + content_sha512 = (known after apply)
2025-11-23 00:01:37.633019 | orchestrator | 00:01:37.632 STDOUT terraform:  + directory_permission = "0777"
2025-11-23 00:01:37.633033 | orchestrator | 00:01:37.632 STDOUT terraform:  + file_permission = "0644"
2025-11-23 00:01:37.633042 | orchestrator | 00:01:37.632 STDOUT terraform:  + filename = "inventory.ci"
2025-11-23 00:01:37.633112 | orchestrator | 00:01:37.633 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.633123 | orchestrator | 00:01:37.633 STDOUT terraform:  }
2025-11-23 00:01:37.633168 | orchestrator | 00:01:37.633 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-11-23 00:01:37.633239 | orchestrator | 00:01:37.633 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-11-23 00:01:37.633294 | orchestrator | 00:01:37.633 STDOUT terraform:  + content = (sensitive value)
2025-11-23 00:01:37.633358 | orchestrator | 00:01:37.633 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-11-23 00:01:37.633434 | orchestrator | 00:01:37.633 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-11-23 00:01:37.633476 | orchestrator | 00:01:37.633 STDOUT terraform:  + content_md5 = (known after apply)
2025-11-23 00:01:37.633549 | orchestrator | 00:01:37.633 STDOUT terraform:  + content_sha1 = (known after apply)
2025-11-23 00:01:37.633604 | orchestrator | 00:01:37.633 STDOUT terraform:  + content_sha256 = (known after apply)
2025-11-23 00:01:37.633668 | orchestrator | 00:01:37.633 STDOUT terraform:  + content_sha512 = (known after apply)
2025-11-23 00:01:37.633706 | orchestrator | 00:01:37.633 STDOUT terraform:  + directory_permission = "0700"
2025-11-23 00:01:37.633751 | orchestrator | 00:01:37.633 STDOUT terraform:  + file_permission = "0600"
2025-11-23 00:01:37.633800 | orchestrator | 00:01:37.633 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-11-23 00:01:37.633867 | orchestrator | 00:01:37.633 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.633873 | orchestrator | 00:01:37.633 STDOUT terraform:  }
2025-11-23 00:01:37.633930 | orchestrator | 00:01:37.633 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-11-23 00:01:37.633980 | orchestrator | 00:01:37.633 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-11-23 00:01:37.634061 | orchestrator | 00:01:37.633 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.634067 | orchestrator | 00:01:37.634 STDOUT terraform:  }
2025-11-23 00:01:37.634144 | orchestrator | 00:01:37.634 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-11-23 00:01:37.634300 | orchestrator | 00:01:37.634 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-11-23 00:01:37.634365 | orchestrator | 00:01:37.634 STDOUT terraform:  + attachment = (known after apply)
2025-11-23 00:01:37.634406 | orchestrator | 00:01:37.634 STDOUT terraform:  + availability_zone = "nova"
2025-11-23 00:01:37.634457 | orchestrator | 00:01:37.634 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.634521 | orchestrator | 00:01:37.634 STDOUT terraform:  + image_id = (known after apply)
2025-11-23 00:01:37.634600 | orchestrator | 00:01:37.634 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.634685 | orchestrator | 00:01:37.634 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-11-23 00:01:37.634729 | orchestrator | 00:01:37.634 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.634762 | orchestrator | 00:01:37.634 STDOUT terraform:  + size = 80
2025-11-23 00:01:37.634806 | orchestrator | 00:01:37.634 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-23 00:01:37.634843 | orchestrator | 00:01:37.634 STDOUT terraform:  + volume_type = "ssd"
2025-11-23 00:01:37.634881 | orchestrator | 00:01:37.634 STDOUT terraform:  }
2025-11-23 00:01:37.634971 | orchestrator | 00:01:37.634 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-11-23 00:01:37.635072 | orchestrator | 00:01:37.634 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-23 00:01:37.635104 | orchestrator | 00:01:37.635 STDOUT terraform:  + attachment = (known after apply)
2025-11-23 00:01:37.635145 | orchestrator | 00:01:37.635 STDOUT terraform:  + availability_zone = "nova"
2025-11-23 00:01:37.635228 | orchestrator | 00:01:37.635 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.635290 | orchestrator | 00:01:37.635 STDOUT terraform:  + image_id = (known after apply)
2025-11-23 00:01:37.635346 | orchestrator | 00:01:37.635 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.635420 | orchestrator | 00:01:37.635 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-11-23 00:01:37.635485 | orchestrator | 00:01:37.635 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.635529 | orchestrator | 00:01:37.635 STDOUT terraform:  + size = 80
2025-11-23 00:01:37.635573 | orchestrator | 00:01:37.635 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-23 00:01:37.635616 | orchestrator | 00:01:37.635 STDOUT terraform:  + volume_type = "ssd"
2025-11-23 00:01:37.635623 | orchestrator | 00:01:37.635 STDOUT terraform:  }
2025-11-23 00:01:37.635763 | orchestrator | 00:01:37.635 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-11-23 00:01:37.635810 | orchestrator | 00:01:37.635 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-23 00:01:37.635860 | orchestrator | 00:01:37.635 STDOUT terraform:  + attachment = (known after apply)
2025-11-23 00:01:37.635909 | orchestrator | 00:01:37.635 STDOUT terraform:  + availability_zone = "nova"
2025-11-23 00:01:37.635986 | orchestrator | 00:01:37.635 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.636050 | orchestrator | 00:01:37.635 STDOUT terraform:  + image_id = (known after apply)
2025-11-23 00:01:37.636110 | orchestrator | 00:01:37.636 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.636201 | orchestrator | 00:01:37.636 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-11-23 00:01:37.636260 | orchestrator | 00:01:37.636 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.636299 | orchestrator | 00:01:37.636 STDOUT terraform:  + size = 80
2025-11-23 00:01:37.636338 | orchestrator | 00:01:37.636 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-23 00:01:37.636376 | orchestrator | 00:01:37.636 STDOUT terraform:  + volume_type = "ssd"
2025-11-23 00:01:37.636388 | orchestrator | 00:01:37.636 STDOUT terraform:  }
2025-11-23 00:01:37.636488 | orchestrator | 00:01:37.636 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-11-23 00:01:37.636615 | orchestrator | 00:01:37.636 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-23 00:01:37.636629 | orchestrator | 00:01:37.636 STDOUT terraform:  + attachment = (known after apply)
2025-11-23 00:01:37.636696 | orchestrator | 00:01:37.636 STDOUT terraform:  + availability_zone = "nova"
2025-11-23 00:01:37.636745 | orchestrator | 00:01:37.636 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.636825 | orchestrator | 00:01:37.636 STDOUT terraform:  + image_id = (known after apply)
2025-11-23 00:01:37.636916 | orchestrator | 00:01:37.636 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.637061 | orchestrator | 00:01:37.636 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-11-23 00:01:37.637072 | orchestrator | 00:01:37.636 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.637081 | orchestrator | 00:01:37.637 STDOUT terraform:  + size = 80
2025-11-23 00:01:37.637151 | orchestrator | 00:01:37.637 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-23 00:01:37.637243 | orchestrator | 00:01:37.637 STDOUT terraform:  + volume_type = "ssd"
2025-11-23 00:01:37.637253 | orchestrator | 00:01:37.637 STDOUT terraform:  }
2025-11-23 00:01:37.637296 | orchestrator | 00:01:37.637 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-11-23 00:01:37.637377 | orchestrator | 00:01:37.637 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-23 00:01:37.637503 | orchestrator | 00:01:37.637 STDOUT terraform:  + attachment = (known after apply)
2025-11-23 00:01:37.637515 | orchestrator | 00:01:37.637 STDOUT terraform:  + availability_zone = "nova"
2025-11-23 00:01:37.637545 | orchestrator | 00:01:37.637 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.637612 | orchestrator | 00:01:37.637 STDOUT terraform:  + image_id = (known after apply)
2025-11-23 00:01:37.637662 | orchestrator | 00:01:37.637 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.637814 | orchestrator | 00:01:37.637 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-11-23 00:01:37.637826 | orchestrator | 00:01:37.637 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.637835 | orchestrator | 00:01:37.637 STDOUT terraform:  + size = 80
2025-11-23 00:01:37.637878 | orchestrator | 00:01:37.637 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-23 00:01:37.637927 | orchestrator | 00:01:37.637 STDOUT terraform:  + volume_type = "ssd"
2025-11-23 00:01:37.637939 | orchestrator | 00:01:37.637 STDOUT terraform:  }
2025-11-23 00:01:37.638241 | orchestrator | 00:01:37.637 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-11-23 00:01:37.638306 | orchestrator | 00:01:37.638 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-23 00:01:37.638455 | orchestrator | 00:01:37.638 STDOUT terraform:  + attachment = (known after apply)
2025-11-23 00:01:37.638477 | orchestrator | 00:01:37.638 STDOUT terraform:  + availability_zone = "nova"
2025-11-23 00:01:37.638487 | orchestrator | 00:01:37.638 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.638626 | orchestrator | 00:01:37.638 STDOUT terraform:  + image_id = (known after apply)
2025-11-23 00:01:37.638636 | orchestrator | 00:01:37.638 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.638698 | orchestrator | 00:01:37.638 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-11-23 00:01:37.638763 | orchestrator | 00:01:37.638 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.638829 | orchestrator | 00:01:37.638 STDOUT terraform:  + size = 80
2025-11-23 00:01:37.638841 | orchestrator | 00:01:37.638 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-23 00:01:37.638942 | orchestrator | 00:01:37.638 STDOUT terraform:  + volume_type = "ssd"
2025-11-23 00:01:37.638952 | orchestrator | 00:01:37.638 STDOUT terraform:  }
2025-11-23 00:01:37.638996 | orchestrator | 00:01:37.638 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-11-23 00:01:37.639085 | orchestrator | 00:01:37.638 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-23 00:01:37.639142 | orchestrator | 00:01:37.639 STDOUT terraform:  + attachment = (known after apply)
2025-11-23 00:01:37.639274 | orchestrator | 00:01:37.639 STDOUT terraform:  + availability_zone = "nova"
2025-11-23 00:01:37.639286 | orchestrator | 00:01:37.639 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.639344 | orchestrator | 00:01:37.639 STDOUT terraform:  + image_id = (known after apply)
2025-11-23 00:01:37.639410 | orchestrator | 00:01:37.639 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.639476 | orchestrator | 00:01:37.639 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-11-23 00:01:37.639595 | orchestrator | 00:01:37.639 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.639605 | orchestrator | 00:01:37.639 STDOUT terraform:  + size = 80
2025-11-23 00:01:37.639615 | orchestrator | 00:01:37.639 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-23 00:01:37.639704 | orchestrator | 00:01:37.639 STDOUT terraform:  + volume_type = "ssd"
2025-11-23 00:01:37.639712 | orchestrator | 00:01:37.639 STDOUT terraform:  }
2025-11-23 00:01:37.639862 | orchestrator | 00:01:37.639 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-11-23 00:01:37.639874 | orchestrator | 00:01:37.639 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-11-23 00:01:37.639884 | orchestrator | 00:01:37.639 STDOUT terraform:  + attachment = (known after apply)
2025-11-23 00:01:37.639926 | orchestrator | 00:01:37.639 STDOUT terraform:  + availability_zone = "nova"
2025-11-23 00:01:37.640029 | orchestrator | 00:01:37.639 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.640043 | orchestrator | 00:01:37.639 STDOUT terraform:  + metadata = (known after apply)
2025-11-23 00:01:37.640151 | orchestrator | 00:01:37.640 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-11-23 00:01:37.640206 | orchestrator | 00:01:37.640 STDOUT terraform:  + region = (known
after apply) 2025-11-23 00:01:37.640235 | orchestrator | 00:01:37.640 STDOUT terraform:  + size = 20 2025-11-23 00:01:37.640278 | orchestrator | 00:01:37.640 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-23 00:01:37.640367 | orchestrator | 00:01:37.640 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.640376 | orchestrator | 00:01:37.640 STDOUT terraform:  } 2025-11-23 00:01:37.640415 | orchestrator | 00:01:37.640 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-11-23 00:01:37.640510 | orchestrator | 00:01:37.640 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-23 00:01:37.640556 | orchestrator | 00:01:37.640 STDOUT terraform:  + attachment = (known after apply) 2025-11-23 00:01:37.640650 | orchestrator | 00:01:37.640 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.640673 | orchestrator | 00:01:37.640 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.640747 | orchestrator | 00:01:37.640 STDOUT terraform:  + metadata = (known after apply) 2025-11-23 00:01:37.640812 | orchestrator | 00:01:37.640 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-11-23 00:01:37.640888 | orchestrator | 00:01:37.640 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.640898 | orchestrator | 00:01:37.640 STDOUT terraform:  + size = 20 2025-11-23 00:01:37.640978 | orchestrator | 00:01:37.640 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-23 00:01:37.640988 | orchestrator | 00:01:37.640 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.640997 | orchestrator | 00:01:37.640 STDOUT terraform:  } 2025-11-23 00:01:37.641090 | orchestrator | 00:01:37.640 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-11-23 00:01:37.641149 | orchestrator | 00:01:37.641 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-23 00:01:37.641266 | 
orchestrator | 00:01:37.641 STDOUT terraform:  + attachment = (known after apply) 2025-11-23 00:01:37.641276 | orchestrator | 00:01:37.641 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.641350 | orchestrator | 00:01:37.641 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.641398 | orchestrator | 00:01:37.641 STDOUT terraform:  + metadata = (known after apply) 2025-11-23 00:01:37.641463 | orchestrator | 00:01:37.641 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-11-23 00:01:37.641561 | orchestrator | 00:01:37.641 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.641722 | orchestrator | 00:01:37.641 STDOUT terraform:  + size = 20 2025-11-23 00:01:37.641732 | orchestrator | 00:01:37.641 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-23 00:01:37.641740 | orchestrator | 00:01:37.641 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.641747 | orchestrator | 00:01:37.641 STDOUT terraform:  } 2025-11-23 00:01:37.641799 | orchestrator | 00:01:37.641 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-11-23 00:01:37.641997 | orchestrator | 00:01:37.641 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-23 00:01:37.642007 | orchestrator | 00:01:37.641 STDOUT terraform:  + attachment = (known after apply) 2025-11-23 00:01:37.642035 | orchestrator | 00:01:37.641 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.642069 | orchestrator | 00:01:37.641 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.642174 | orchestrator | 00:01:37.642 STDOUT terraform:  + metadata = (known after apply) 2025-11-23 00:01:37.642383 | orchestrator | 00:01:37.642 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-11-23 00:01:37.642509 | orchestrator | 00:01:37.642 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.642520 | orchestrator | 00:01:37.642 STDOUT terraform:  + size 
= 20 2025-11-23 00:01:37.642564 | orchestrator | 00:01:37.642 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-23 00:01:37.642645 | orchestrator | 00:01:37.642 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.642654 | orchestrator | 00:01:37.642 STDOUT terraform:  } 2025-11-23 00:01:37.642751 | orchestrator | 00:01:37.642 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-11-23 00:01:37.642964 | orchestrator | 00:01:37.642 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-23 00:01:37.642974 | orchestrator | 00:01:37.642 STDOUT terraform:  + attachment = (known after apply) 2025-11-23 00:01:37.642981 | orchestrator | 00:01:37.642 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.642988 | orchestrator | 00:01:37.642 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.642997 | orchestrator | 00:01:37.642 STDOUT terraform:  + metadata = (known after apply) 2025-11-23 00:01:37.643069 | orchestrator | 00:01:37.642 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-11-23 00:01:37.643114 | orchestrator | 00:01:37.643 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.643164 | orchestrator | 00:01:37.643 STDOUT terraform:  + size = 20 2025-11-23 00:01:37.643175 | orchestrator | 00:01:37.643 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-23 00:01:37.643269 | orchestrator | 00:01:37.643 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.643278 | orchestrator | 00:01:37.643 STDOUT terraform:  } 2025-11-23 00:01:37.643358 | orchestrator | 00:01:37.643 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-11-23 00:01:37.643438 | orchestrator | 00:01:37.643 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-23 00:01:37.643502 | orchestrator | 00:01:37.643 STDOUT terraform:  + attachment = (known after apply) 2025-11-23 
00:01:37.643565 | orchestrator | 00:01:37.643 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.643630 | orchestrator | 00:01:37.643 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.643717 | orchestrator | 00:01:37.643 STDOUT terraform:  + metadata = (known after apply) 2025-11-23 00:01:37.643736 | orchestrator | 00:01:37.643 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-11-23 00:01:37.643812 | orchestrator | 00:01:37.643 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.643820 | orchestrator | 00:01:37.643 STDOUT terraform:  + size = 20 2025-11-23 00:01:37.643857 | orchestrator | 00:01:37.643 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-23 00:01:37.643881 | orchestrator | 00:01:37.643 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.643894 | orchestrator | 00:01:37.643 STDOUT terraform:  } 2025-11-23 00:01:37.644002 | orchestrator | 00:01:37.643 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-11-23 00:01:37.644051 | orchestrator | 00:01:37.643 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-23 00:01:37.644118 | orchestrator | 00:01:37.644 STDOUT terraform:  + attachment = (known after apply) 2025-11-23 00:01:37.644172 | orchestrator | 00:01:37.644 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.644268 | orchestrator | 00:01:37.644 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.644330 | orchestrator | 00:01:37.644 STDOUT terraform:  + metadata = (known after apply) 2025-11-23 00:01:37.644427 | orchestrator | 00:01:37.644 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-11-23 00:01:37.644467 | orchestrator | 00:01:37.644 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.644478 | orchestrator | 00:01:37.644 STDOUT terraform:  + size = 20 2025-11-23 00:01:37.644525 | orchestrator | 00:01:37.644 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-11-23 00:01:37.644566 | orchestrator | 00:01:37.644 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.644656 | orchestrator | 00:01:37.644 STDOUT terraform:  } 2025-11-23 00:01:37.644677 | orchestrator | 00:01:37.644 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-11-23 00:01:37.644741 | orchestrator | 00:01:37.644 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-23 00:01:37.644787 | orchestrator | 00:01:37.644 STDOUT terraform:  + attachment = (known after apply) 2025-11-23 00:01:37.644848 | orchestrator | 00:01:37.644 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.644886 | orchestrator | 00:01:37.644 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.644964 | orchestrator | 00:01:37.644 STDOUT terraform:  + metadata = (known after apply) 2025-11-23 00:01:37.645077 | orchestrator | 00:01:37.644 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-11-23 00:01:37.645133 | orchestrator | 00:01:37.645 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.645141 | orchestrator | 00:01:37.645 STDOUT terraform:  + size = 20 2025-11-23 00:01:37.645181 | orchestrator | 00:01:37.645 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-23 00:01:37.645236 | orchestrator | 00:01:37.645 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.645244 | orchestrator | 00:01:37.645 STDOUT terraform:  } 2025-11-23 00:01:37.645340 | orchestrator | 00:01:37.645 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-11-23 00:01:37.645421 | orchestrator | 00:01:37.645 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-23 00:01:37.645554 | orchestrator | 00:01:37.645 STDOUT terraform:  + attachment = (known after apply) 2025-11-23 00:01:37.645563 | orchestrator | 00:01:37.645 STDOUT terraform:  + availability_zone = 
"nova" 2025-11-23 00:01:37.645570 | orchestrator | 00:01:37.645 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.645614 | orchestrator | 00:01:37.645 STDOUT terraform:  + metadata = (known after apply) 2025-11-23 00:01:37.645661 | orchestrator | 00:01:37.645 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-11-23 00:01:37.645777 | orchestrator | 00:01:37.645 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.645788 | orchestrator | 00:01:37.645 STDOUT terraform:  + size = 20 2025-11-23 00:01:37.645798 | orchestrator | 00:01:37.645 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-23 00:01:37.645883 | orchestrator | 00:01:37.645 STDOUT terraform:  + volume_type = "ssd" 2025-11-23 00:01:37.645892 | orchestrator | 00:01:37.645 STDOUT terraform:  } 2025-11-23 00:01:37.645971 | orchestrator | 00:01:37.645 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-11-23 00:01:37.645981 | orchestrator | 00:01:37.645 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-11-23 00:01:37.646076 | orchestrator | 00:01:37.645 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-23 00:01:37.646160 | orchestrator | 00:01:37.646 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-23 00:01:37.646208 | orchestrator | 00:01:37.646 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-23 00:01:37.646271 | orchestrator | 00:01:37.646 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.646315 | orchestrator | 00:01:37.646 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.646326 | orchestrator | 00:01:37.646 STDOUT terraform:  + config_drive = true 2025-11-23 00:01:37.646407 | orchestrator | 00:01:37.646 STDOUT terraform:  + created = (known after apply) 2025-11-23 00:01:37.646445 | orchestrator | 00:01:37.646 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-23 00:01:37.646491 | orchestrator | 
00:01:37.646 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-11-23 00:01:37.646546 | orchestrator | 00:01:37.646 STDOUT terraform:  + force_delete = false 2025-11-23 00:01:37.646591 | orchestrator | 00:01:37.646 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-23 00:01:37.646659 | orchestrator | 00:01:37.646 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.646812 | orchestrator | 00:01:37.646 STDOUT terraform:  + image_id = (known after apply) 2025-11-23 00:01:37.646821 | orchestrator | 00:01:37.646 STDOUT terraform:  + image_name = (known after apply) 2025-11-23 00:01:37.646841 | orchestrator | 00:01:37.646 STDOUT terraform:  + key_pair = "testbed" 2025-11-23 00:01:37.646859 | orchestrator | 00:01:37.646 STDOUT terraform:  + name = "testbed-manager" 2025-11-23 00:01:37.646913 | orchestrator | 00:01:37.646 STDOUT terraform:  + power_state = "active" 2025-11-23 00:01:37.646983 | orchestrator | 00:01:37.646 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.647028 | orchestrator | 00:01:37.646 STDOUT terraform:  + security_groups = (known after apply) 2025-11-23 00:01:37.647084 | orchestrator | 00:01:37.647 STDOUT terraform:  + stop_before_destroy = false 2025-11-23 00:01:37.647137 | orchestrator | 00:01:37.647 STDOUT terraform:  + updated = (known after apply) 2025-11-23 00:01:37.647204 | orchestrator | 00:01:37.647 STDOUT terraform:  + user_data = (sensitive value) 2025-11-23 00:01:37.647340 | orchestrator | 00:01:37.647 STDOUT terraform:  + block_device { 2025-11-23 00:01:37.647349 | orchestrator | 00:01:37.647 STDOUT terraform:  + boot_index = 0 2025-11-23 00:01:37.647358 | orchestrator | 00:01:37.647 STDOUT terraform:  + delete_on_termination = false 2025-11-23 00:01:37.647412 | orchestrator | 00:01:37.647 STDOUT terraform:  + destination_type = "volume" 2025-11-23 00:01:37.647456 | orchestrator | 00:01:37.647 STDOUT terraform:  + multiattach = false 2025-11-23 00:01:37.647503 | orchestrator | 
00:01:37.647 STDOUT terraform:  + source_type = "volume" 2025-11-23 00:01:37.647545 | orchestrator | 00:01:37.647 STDOUT terraform:  + uuid = (known after apply) 2025-11-23 00:01:37.647555 | orchestrator | 00:01:37.647 STDOUT terraform:  } 2025-11-23 00:01:37.647584 | orchestrator | 00:01:37.647 STDOUT terraform:  + network { 2025-11-23 00:01:37.647626 | orchestrator | 00:01:37.647 STDOUT terraform:  + access_network = false 2025-11-23 00:01:37.647688 | orchestrator | 00:01:37.647 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-23 00:01:37.647699 | orchestrator | 00:01:37.647 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-23 00:01:37.647817 | orchestrator | 00:01:37.647 STDOUT terraform:  + mac = (known after apply) 2025-11-23 00:01:37.647825 | orchestrator | 00:01:37.647 STDOUT terraform:  + name = (known after apply) 2025-11-23 00:01:37.647835 | orchestrator | 00:01:37.647 STDOUT terraform:  + port = (known after apply) 2025-11-23 00:01:37.647956 | orchestrator | 00:01:37.647 STDOUT terraform:  + uuid = (known after apply) 2025-11-23 00:01:37.647965 | orchestrator | 00:01:37.647 STDOUT terraform:  } 2025-11-23 00:01:37.647973 | orchestrator | 00:01:37.647 STDOUT terraform:  } 2025-11-23 00:01:37.647982 | orchestrator | 00:01:37.647 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-11-23 00:01:37.648077 | orchestrator | 00:01:37.647 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-23 00:01:37.648157 | orchestrator | 00:01:37.648 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-23 00:01:37.648166 | orchestrator | 00:01:37.648 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-23 00:01:37.648233 | orchestrator | 00:01:37.648 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-23 00:01:37.648300 | orchestrator | 00:01:37.648 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.648312 | orchestrator | 
00:01:37.648 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.648369 | orchestrator | 00:01:37.648 STDOUT terraform:  + config_drive = true 2025-11-23 00:01:37.648419 | orchestrator | 00:01:37.648 STDOUT terraform:  + created = (known after apply) 2025-11-23 00:01:37.648483 | orchestrator | 00:01:37.648 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-23 00:01:37.648493 | orchestrator | 00:01:37.648 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-23 00:01:37.648549 | orchestrator | 00:01:37.648 STDOUT terraform:  + force_delete = false 2025-11-23 00:01:37.648613 | orchestrator | 00:01:37.648 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-23 00:01:37.648669 | orchestrator | 00:01:37.648 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.648790 | orchestrator | 00:01:37.648 STDOUT terraform:  + image_id = (known after apply) 2025-11-23 00:01:37.648800 | orchestrator | 00:01:37.648 STDOUT terraform:  + image_name = (known after apply) 2025-11-23 00:01:37.648807 | orchestrator | 00:01:37.648 STDOUT terraform:  + key_pair = "testbed" 2025-11-23 00:01:37.648848 | orchestrator | 00:01:37.648 STDOUT terraform:  + name = "testbed-node-0" 2025-11-23 00:01:37.648895 | orchestrator | 00:01:37.648 STDOUT terraform:  + power_state = "active" 2025-11-23 00:01:37.649013 | orchestrator | 00:01:37.648 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.649030 | orchestrator | 00:01:37.648 STDOUT terraform:  + security_groups = (known after apply) 2025-11-23 00:01:37.649037 | orchestrator | 00:01:37.648 STDOUT terraform:  + stop_before_destroy = false 2025-11-23 00:01:37.649084 | orchestrator | 00:01:37.649 STDOUT terraform:  + updated = (known after apply) 2025-11-23 00:01:37.649149 | orchestrator | 00:01:37.649 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-23 00:01:37.649160 | orchestrator | 00:01:37.649 STDOUT terraform:  + block_device { 
2025-11-23 00:01:37.649243 | orchestrator | 00:01:37.649 STDOUT terraform:  + boot_index = 0 2025-11-23 00:01:37.649289 | orchestrator | 00:01:37.649 STDOUT terraform:  + delete_on_termination = false 2025-11-23 00:01:37.649356 | orchestrator | 00:01:37.649 STDOUT terraform:  + destination_type = "volume" 2025-11-23 00:01:37.649364 | orchestrator | 00:01:37.649 STDOUT terraform:  + multiattach = false 2025-11-23 00:01:37.649433 | orchestrator | 00:01:37.649 STDOUT terraform:  + source_type = "volume" 2025-11-23 00:01:37.649445 | orchestrator | 00:01:37.649 STDOUT terraform:  + uuid = (known after apply) 2025-11-23 00:01:37.649454 | orchestrator | 00:01:37.649 STDOUT terraform:  } 2025-11-23 00:01:37.649477 | orchestrator | 00:01:37.649 STDOUT terraform:  + network { 2025-11-23 00:01:37.649504 | orchestrator | 00:01:37.649 STDOUT terraform:  + access_network = false 2025-11-23 00:01:37.649561 | orchestrator | 00:01:37.649 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-23 00:01:37.649601 | orchestrator | 00:01:37.649 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-23 00:01:37.649698 | orchestrator | 00:01:37.649 STDOUT terraform:  + mac = (known after apply) 2025-11-23 00:01:37.649708 | orchestrator | 00:01:37.649 STDOUT terraform:  + name = (known after apply) 2025-11-23 00:01:37.649716 | orchestrator | 00:01:37.649 STDOUT terraform:  + port = (known after apply) 2025-11-23 00:01:37.649759 | orchestrator | 00:01:37.649 STDOUT terraform:  + uuid = (known after apply) 2025-11-23 00:01:37.649769 | orchestrator | 00:01:37.649 STDOUT terraform:  } 2025-11-23 00:01:37.649790 | orchestrator | 00:01:37.649 STDOUT terraform:  } 2025-11-23 00:01:37.649910 | orchestrator | 00:01:37.649 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-11-23 00:01:37.649921 | orchestrator | 00:01:37.649 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-23 00:01:37.649968 | orchestrator | 
00:01:37.649 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-23 00:01:37.650078 | orchestrator | 00:01:37.649 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-23 00:01:37.650092 | orchestrator | 00:01:37.649 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-23 00:01:37.650146 | orchestrator | 00:01:37.650 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.650246 | orchestrator | 00:01:37.650 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.650257 | orchestrator | 00:01:37.650 STDOUT terraform:  + config_drive = true 2025-11-23 00:01:37.650385 | orchestrator | 00:01:37.650 STDOUT terraform:  + created = (known after apply) 2025-11-23 00:01:37.650398 | orchestrator | 00:01:37.650 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-23 00:01:37.650405 | orchestrator | 00:01:37.650 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-23 00:01:37.650414 | orchestrator | 00:01:37.650 STDOUT terraform:  + force_delete = false 2025-11-23 00:01:37.650481 | orchestrator | 00:01:37.650 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-23 00:01:37.650494 | orchestrator | 00:01:37.650 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.650539 | orchestrator | 00:01:37.650 STDOUT terraform:  + image_id = (known after apply) 2025-11-23 00:01:37.650597 | orchestrator | 00:01:37.650 STDOUT terraform:  + image_name = (known after apply) 2025-11-23 00:01:37.650607 | orchestrator | 00:01:37.650 STDOUT terraform:  + key_pair = "testbed" 2025-11-23 00:01:37.650658 | orchestrator | 00:01:37.650 STDOUT terraform:  + name = "testbed-node-1" 2025-11-23 00:01:37.650766 | orchestrator | 00:01:37.650 STDOUT terraform:  + power_state = "active" 2025-11-23 00:01:37.650777 | orchestrator | 00:01:37.650 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.650786 | orchestrator | 00:01:37.650 STDOUT terraform:  + security_groups = (known after apply) 
2025-11-23 00:01:37.650795 | orchestrator | 00:01:37.650 STDOUT terraform:  + stop_before_destroy = false 2025-11-23 00:01:37.650866 | orchestrator | 00:01:37.650 STDOUT terraform:  + updated = (known after apply) 2025-11-23 00:01:37.650943 | orchestrator | 00:01:37.650 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-23 00:01:37.650956 | orchestrator | 00:01:37.650 STDOUT terraform:  + block_device { 2025-11-23 00:01:37.651093 | orchestrator | 00:01:37.650 STDOUT terraform:  + boot_index = 0 2025-11-23 00:01:37.651104 | orchestrator | 00:01:37.650 STDOUT terraform:  + delete_on_termination = false 2025-11-23 00:01:37.651111 | orchestrator | 00:01:37.651 STDOUT terraform:  + destination_type = "volume" 2025-11-23 00:01:37.651119 | orchestrator | 00:01:37.651 STDOUT terraform:  + multiattach = false 2025-11-23 00:01:37.651128 | orchestrator | 00:01:37.651 STDOUT terraform:  + source_type = "volume" 2025-11-23 00:01:37.651236 | orchestrator | 00:01:37.651 STDOUT terraform:  + uuid = (known after apply) 2025-11-23 00:01:37.651247 | orchestrator | 00:01:37.651 STDOUT terraform:  } 2025-11-23 00:01:37.651256 | orchestrator | 00:01:37.651 STDOUT terraform:  + network { 2025-11-23 00:01:37.651263 | orchestrator | 00:01:37.651 STDOUT terraform:  + access_network = false 2025-11-23 00:01:37.651300 | orchestrator | 00:01:37.651 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-23 00:01:37.651370 | orchestrator | 00:01:37.651 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-23 00:01:37.651381 | orchestrator | 00:01:37.651 STDOUT terraform:  + mac = (known after apply) 2025-11-23 00:01:37.651457 | orchestrator | 00:01:37.651 STDOUT terraform:  + name = (known after apply) 2025-11-23 00:01:37.651481 | orchestrator | 00:01:37.651 STDOUT terraform:  + port = (known after apply) 2025-11-23 00:01:37.651491 | orchestrator | 00:01:37.651 STDOUT terraform:  + uuid = (known after apply) 2025-11-23 00:01:37.651513 | 
orchestrator | 00:01:37.651 STDOUT terraform:  } 2025-11-23 00:01:37.651526 | orchestrator | 00:01:37.651 STDOUT terraform:  } 2025-11-23 00:01:37.651644 | orchestrator | 00:01:37.651 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-11-23 00:01:37.651655 | orchestrator | 00:01:37.651 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-23 00:01:37.651734 | orchestrator | 00:01:37.651 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-23 00:01:37.651744 | orchestrator | 00:01:37.651 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-23 00:01:37.651784 | orchestrator | 00:01:37.651 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-23 00:01:37.651825 | orchestrator | 00:01:37.651 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.651859 | orchestrator | 00:01:37.651 STDOUT terraform:  + availability_zone = "nova" 2025-11-23 00:01:37.651870 | orchestrator | 00:01:37.651 STDOUT terraform:  + config_drive = true 2025-11-23 00:01:37.651920 | orchestrator | 00:01:37.651 STDOUT terraform:  + created = (known after apply) 2025-11-23 00:01:37.652065 | orchestrator | 00:01:37.651 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-23 00:01:37.652074 | orchestrator | 00:01:37.651 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-23 00:01:37.652093 | orchestrator | 00:01:37.651 STDOUT terraform:  + force_delete = false 2025-11-23 00:01:37.652103 | orchestrator | 00:01:37.652 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-23 00:01:37.652112 | orchestrator | 00:01:37.652 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.652179 | orchestrator | 00:01:37.652 STDOUT terraform:  + image_id = (known after apply) 2025-11-23 00:01:37.652238 | orchestrator | 00:01:37.652 STDOUT terraform:  + image_name = (known after apply) 2025-11-23 00:01:37.652247 | orchestrator | 00:01:37.652 STDOUT terraform:  + 
2025-11-23 00:01:37.652 | orchestrator | 00:01:37.652 STDOUT terraform:
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  #   (identical to node_server[3] apart from name = "testbed-node-4")

  # openstack_compute_instance_v2.node_server[5] will be created
  #   (identical to node_server[3] apart from name = "testbed-node-5")
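A resource definition along the following lines would produce plan entries like those above. This is a sketch reconstructed from the plan output only, not the actual osism/testbed Terraform source; names such as `var.number_of_nodes` and `openstack_blockstorage_volume_v3.node_volume` are illustrative assumptions.

```hcl
# Sketch only: variable and resource names below are assumptions,
# inferred from the plan output, not taken from the real testbed repo.
resource "openstack_compute_instance_v2" "node_server" {
  count             = var.number_of_nodes            # plan shows indices [2]..[5]
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  user_data         = file("user_data.sh")           # shown only as a hash in the plan

  # Boot from a pre-created volume; the volume is kept on instance deletion.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
  }

  # Attach via a pre-created Neutron port so fixed IPs and
  # allowed address pairs can be managed on the port resource.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```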
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [8] will be created
  #   (identical to node_volume_attachment[0])

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments                  (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      (scalar attribute set identical to manager_port_management)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      (scalar attribute set identical to manager_port_management)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
terraform:  } 2025-11-23 00:01:37.670960 | orchestrator | 00:01:37.670 STDOUT terraform:  } 2025-11-23 00:01:37.670967 | orchestrator | 00:01:37.670 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-11-23 00:01:37.671534 | orchestrator | 00:01:37.670 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-23 00:01:37.671563 | orchestrator | 00:01:37.670 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-23 00:01:37.671572 | orchestrator | 00:01:37.671 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-23 00:01:37.671578 | orchestrator | 00:01:37.671 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-23 00:01:37.671584 | orchestrator | 00:01:37.671 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.671591 | orchestrator | 00:01:37.671 STDOUT terraform:  + device_id = (known after apply) 2025-11-23 00:01:37.671598 | orchestrator | 00:01:37.671 STDOUT terraform:  + device_owner = (known after apply) 2025-11-23 00:01:37.671604 | orchestrator | 00:01:37.671 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-23 00:01:37.671611 | orchestrator | 00:01:37.671 STDOUT terraform:  + dns_name = (known after apply) 2025-11-23 00:01:37.671618 | orchestrator | 00:01:37.671 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.671626 | orchestrator | 00:01:37.671 STDOUT terraform:  + mac_address = (known after apply) 2025-11-23 00:01:37.671633 | orchestrator | 00:01:37.671 STDOUT terraform:  + network_id = (known after apply) 2025-11-23 00:01:37.671651 | orchestrator | 00:01:37.671 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-23 00:01:37.671658 | orchestrator | 00:01:37.671 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-23 00:01:37.671664 | orchestrator | 00:01:37.671 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.671676 | 
orchestrator | 00:01:37.671 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-23 00:01:37.671683 | orchestrator | 00:01:37.671 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.671691 | orchestrator | 00:01:37.671 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.671697 | orchestrator | 00:01:37.671 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-23 00:01:37.671704 | orchestrator | 00:01:37.671 STDOUT terraform:  } 2025-11-23 00:01:37.671711 | orchestrator | 00:01:37.671 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.671717 | orchestrator | 00:01:37.671 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-23 00:01:37.671726 | orchestrator | 00:01:37.671 STDOUT terraform:  } 2025-11-23 00:01:37.671733 | orchestrator | 00:01:37.671 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.671923 | orchestrator | 00:01:37.671 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-23 00:01:37.671939 | orchestrator | 00:01:37.671 STDOUT terraform:  } 2025-11-23 00:01:37.671954 | orchestrator | 00:01:37.671 STDOUT terraform:  + binding (known after apply) 2025-11-23 00:01:37.671961 | orchestrator | 00:01:37.671 STDOUT terraform:  + fixed_ip { 2025-11-23 00:01:37.671967 | orchestrator | 00:01:37.671 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-11-23 00:01:37.671974 | orchestrator | 00:01:37.671 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-23 00:01:37.671981 | orchestrator | 00:01:37.671 STDOUT terraform:  } 2025-11-23 00:01:37.671987 | orchestrator | 00:01:37.671 STDOUT terraform:  } 2025-11-23 00:01:37.671994 | orchestrator | 00:01:37.671 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-11-23 00:01:37.672004 | orchestrator | 00:01:37.671 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-23 00:01:37.672012 | orchestrator | 00:01:37.671 STDOUT 
terraform:  + admin_state_up = (known after apply) 2025-11-23 00:01:37.672021 | orchestrator | 00:01:37.671 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-23 00:01:37.672160 | orchestrator | 00:01:37.672 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-23 00:01:37.672170 | orchestrator | 00:01:37.672 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.672177 | orchestrator | 00:01:37.672 STDOUT terraform:  + device_id = (known after apply) 2025-11-23 00:01:37.672206 | orchestrator | 00:01:37.672 STDOUT terraform:  + device_owner = (known after apply) 2025-11-23 00:01:37.672286 | orchestrator | 00:01:37.672 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-23 00:01:37.672314 | orchestrator | 00:01:37.672 STDOUT terraform:  + dns_name = (known after apply) 2025-11-23 00:01:37.672374 | orchestrator | 00:01:37.672 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.672397 | orchestrator | 00:01:37.672 STDOUT terraform:  + mac_address = (known after apply) 2025-11-23 00:01:37.672457 | orchestrator | 00:01:37.672 STDOUT terraform:  + network_id = (known after apply) 2025-11-23 00:01:37.672469 | orchestrator | 00:01:37.672 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-23 00:01:37.672512 | orchestrator | 00:01:37.672 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-23 00:01:37.672595 | orchestrator | 00:01:37.672 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.672606 | orchestrator | 00:01:37.672 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-23 00:01:37.672645 | orchestrator | 00:01:37.672 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.672735 | orchestrator | 00:01:37.672 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.672746 | orchestrator | 00:01:37.672 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-23 00:01:37.672794 | orchestrator | 
00:01:37.672 STDOUT terraform:  } 2025-11-23 00:01:37.672804 | orchestrator | 00:01:37.672 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.672813 | orchestrator | 00:01:37.672 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-23 00:01:37.672821 | orchestrator | 00:01:37.672 STDOUT terraform:  } 2025-11-23 00:01:37.672829 | orchestrator | 00:01:37.672 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.673017 | orchestrator | 00:01:37.672 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-23 00:01:37.673032 | orchestrator | 00:01:37.672 STDOUT terraform:  } 2025-11-23 00:01:37.673039 | orchestrator | 00:01:37.672 STDOUT terraform:  + binding (known after apply) 2025-11-23 00:01:37.673045 | orchestrator | 00:01:37.672 STDOUT terraform:  + fixed_ip { 2025-11-23 00:01:37.673051 | orchestrator | 00:01:37.672 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-11-23 00:01:37.673058 | orchestrator | 00:01:37.672 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-23 00:01:37.673068 | orchestrator | 00:01:37.673 STDOUT terraform:  } 2025-11-23 00:01:37.673075 | orchestrator | 00:01:37.673 STDOUT terraform:  } 2025-11-23 00:01:37.673203 | orchestrator | 00:01:37.673 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-11-23 00:01:37.673216 | orchestrator | 00:01:37.673 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-23 00:01:37.673274 | orchestrator | 00:01:37.673 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-23 00:01:37.673285 | orchestrator | 00:01:37.673 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-23 00:01:37.673335 | orchestrator | 00:01:37.673 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-23 00:01:37.673421 | orchestrator | 00:01:37.673 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.673433 | orchestrator | 00:01:37.673 STDOUT 
terraform:  + device_id = (known after apply) 2025-11-23 00:01:37.673442 | orchestrator | 00:01:37.673 STDOUT terraform:  + device_owner = (known after apply) 2025-11-23 00:01:37.673482 | orchestrator | 00:01:37.673 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-23 00:01:37.674098 | orchestrator | 00:01:37.673 STDOUT terraform:  + dns_name = (known after apply) 2025-11-23 00:01:37.674140 | orchestrator | 00:01:37.673 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.674146 | orchestrator | 00:01:37.673 STDOUT terraform:  + mac_address = (known after apply) 2025-11-23 00:01:37.674156 | orchestrator | 00:01:37.673 STDOUT terraform:  + network_id = (known after apply) 2025-11-23 00:01:37.674161 | orchestrator | 00:01:37.673 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-23 00:01:37.674165 | orchestrator | 00:01:37.673 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-23 00:01:37.674174 | orchestrator | 00:01:37.673 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.674179 | orchestrator | 00:01:37.674 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-23 00:01:37.674244 | orchestrator | 00:01:37.674 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.674286 | orchestrator | 00:01:37.674 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.674338 | orchestrator | 00:01:37.674 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-23 00:01:37.674366 | orchestrator | 00:01:37.674 STDOUT terraform:  } 2025-11-23 00:01:37.674408 | orchestrator | 00:01:37.674 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.674446 | orchestrator | 00:01:37.674 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-23 00:01:37.674471 | orchestrator | 00:01:37.674 STDOUT terraform:  } 2025-11-23 00:01:37.674507 | orchestrator | 00:01:37.674 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.674578 | 
orchestrator | 00:01:37.674 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-23 00:01:37.674584 | orchestrator | 00:01:37.674 STDOUT terraform:  } 2025-11-23 00:01:37.674626 | orchestrator | 00:01:37.674 STDOUT terraform:  + binding (known after apply) 2025-11-23 00:01:37.674651 | orchestrator | 00:01:37.674 STDOUT terraform:  + fixed_ip { 2025-11-23 00:01:37.674700 | orchestrator | 00:01:37.674 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-11-23 00:01:37.674757 | orchestrator | 00:01:37.674 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-23 00:01:37.674781 | orchestrator | 00:01:37.674 STDOUT terraform:  } 2025-11-23 00:01:37.674804 | orchestrator | 00:01:37.674 STDOUT terraform:  } 2025-11-23 00:01:37.674920 | orchestrator | 00:01:37.674 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-11-23 00:01:37.674984 | orchestrator | 00:01:37.674 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-23 00:01:37.675122 | orchestrator | 00:01:37.674 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-23 00:01:37.675248 | orchestrator | 00:01:37.675 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-23 00:01:37.675292 | orchestrator | 00:01:37.675 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-23 00:01:37.675363 | orchestrator | 00:01:37.675 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.675426 | orchestrator | 00:01:37.675 STDOUT terraform:  + device_id = (known after apply) 2025-11-23 00:01:37.675491 | orchestrator | 00:01:37.675 STDOUT terraform:  + device_owner = (known after apply) 2025-11-23 00:01:37.675557 | orchestrator | 00:01:37.675 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-23 00:01:37.675622 | orchestrator | 00:01:37.675 STDOUT terraform:  + dns_name = (known after apply) 2025-11-23 00:01:37.675697 | orchestrator | 00:01:37.675 STDOUT terraform:  
+ id = (known after apply) 2025-11-23 00:01:37.675765 | orchestrator | 00:01:37.675 STDOUT terraform:  + mac_address = (known after apply) 2025-11-23 00:01:37.675832 | orchestrator | 00:01:37.675 STDOUT terraform:  + network_id = (known after apply) 2025-11-23 00:01:37.675916 | orchestrator | 00:01:37.675 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-23 00:01:37.675974 | orchestrator | 00:01:37.675 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-23 00:01:37.676044 | orchestrator | 00:01:37.675 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.676109 | orchestrator | 00:01:37.676 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-23 00:01:37.676180 | orchestrator | 00:01:37.676 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.676252 | orchestrator | 00:01:37.676 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.676300 | orchestrator | 00:01:37.676 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-23 00:01:37.676326 | orchestrator | 00:01:37.676 STDOUT terraform:  } 2025-11-23 00:01:37.676361 | orchestrator | 00:01:37.676 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.676421 | orchestrator | 00:01:37.676 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-23 00:01:37.676445 | orchestrator | 00:01:37.676 STDOUT terraform:  } 2025-11-23 00:01:37.676479 | orchestrator | 00:01:37.676 STDOUT terraform:  + allowed_address_pairs { 2025-11-23 00:01:37.676530 | orchestrator | 00:01:37.676 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-23 00:01:37.676558 | orchestrator | 00:01:37.676 STDOUT terraform:  } 2025-11-23 00:01:37.676599 | orchestrator | 00:01:37.676 STDOUT terraform:  + binding (known after apply) 2025-11-23 00:01:37.676623 | orchestrator | 00:01:37.676 STDOUT terraform:  + fixed_ip { 2025-11-23 00:01:37.676662 | orchestrator | 00:01:37.676 STDOUT terraform:  + ip_address = "192.168.16.15" 
2025-11-23 00:01:37.676714 | orchestrator | 00:01:37.676 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-23 00:01:37.676754 | orchestrator | 00:01:37.676 STDOUT terraform:  } 2025-11-23 00:01:37.676761 | orchestrator | 00:01:37.676 STDOUT terraform:  } 2025-11-23 00:01:37.676852 | orchestrator | 00:01:37.676 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-11-23 00:01:37.676943 | orchestrator | 00:01:37.676 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-11-23 00:01:37.676982 | orchestrator | 00:01:37.676 STDOUT terraform:  + force_destroy = false 2025-11-23 00:01:37.677039 | orchestrator | 00:01:37.676 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.677107 | orchestrator | 00:01:37.677 STDOUT terraform:  + port_id = (known after apply) 2025-11-23 00:01:37.677176 | orchestrator | 00:01:37.677 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.677258 | orchestrator | 00:01:37.677 STDOUT terraform:  + router_id = (known after apply) 2025-11-23 00:01:37.677322 | orchestrator | 00:01:37.677 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-23 00:01:37.677347 | orchestrator | 00:01:37.677 STDOUT terraform:  } 2025-11-23 00:01:37.677422 | orchestrator | 00:01:37.677 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-11-23 00:01:37.677476 | orchestrator | 00:01:37.677 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-11-23 00:01:37.677544 | orchestrator | 00:01:37.677 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-23 00:01:37.677614 | orchestrator | 00:01:37.677 STDOUT terraform:  + all_tags = (known after apply) 2025-11-23 00:01:37.677656 | orchestrator | 00:01:37.677 STDOUT terraform:  + availability_zone_hints = [ 2025-11-23 00:01:37.677682 | orchestrator | 00:01:37.677 STDOUT terraform:  + "nova", 2025-11-23 00:01:37.677710 | 
orchestrator | 00:01:37.677 STDOUT terraform:  ] 2025-11-23 00:01:37.677780 | orchestrator | 00:01:37.677 STDOUT terraform:  + distributed = (known after apply) 2025-11-23 00:01:37.677847 | orchestrator | 00:01:37.677 STDOUT terraform:  + enable_snat = (known after apply) 2025-11-23 00:01:37.677938 | orchestrator | 00:01:37.677 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-11-23 00:01:37.678000 | orchestrator | 00:01:37.677 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-11-23 00:01:37.686213 | orchestrator | 00:01:37.677 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.686326 | orchestrator | 00:01:37.686 STDOUT terraform:  + name = "testbed" 2025-11-23 00:01:37.686399 | orchestrator | 00:01:37.686 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.686467 | orchestrator | 00:01:37.686 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.686574 | orchestrator | 00:01:37.686 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-11-23 00:01:37.689523 | orchestrator | 00:01:37.686 STDOUT terraform:  } 2025-11-23 00:01:37.689548 | orchestrator | 00:01:37.686 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-11-23 00:01:37.689553 | orchestrator | 00:01:37.686 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-11-23 00:01:37.689558 | orchestrator | 00:01:37.686 STDOUT terraform:  + description = "ssh" 2025-11-23 00:01:37.689562 | orchestrator | 00:01:37.686 STDOUT terraform:  + direction = "ingress" 2025-11-23 00:01:37.689566 | orchestrator | 00:01:37.686 STDOUT terraform:  + ethertype = "IPv4" 2025-11-23 00:01:37.689570 | orchestrator | 00:01:37.686 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.689586 | orchestrator | 00:01:37.687 STDOUT terraform:  + port_range_max = 22 2025-11-23 00:01:37.689590 | 
orchestrator | 00:01:37.687 STDOUT terraform:  + port_range_min = 22 2025-11-23 00:01:37.689594 | orchestrator | 00:01:37.687 STDOUT terraform:  + protocol = "tcp" 2025-11-23 00:01:37.689597 | orchestrator | 00:01:37.687 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.689602 | orchestrator | 00:01:37.687 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-23 00:01:37.689605 | orchestrator | 00:01:37.687 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-23 00:01:37.689609 | orchestrator | 00:01:37.687 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-23 00:01:37.689613 | orchestrator | 00:01:37.687 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-23 00:01:37.689617 | orchestrator | 00:01:37.687 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.689624 | orchestrator | 00:01:37.687 STDOUT terraform:  } 2025-11-23 00:01:37.689628 | orchestrator | 00:01:37.687 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-11-23 00:01:37.689632 | orchestrator | 00:01:37.687 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-11-23 00:01:37.689635 | orchestrator | 00:01:37.687 STDOUT terraform:  + description = "wireguard" 2025-11-23 00:01:37.689639 | orchestrator | 00:01:37.687 STDOUT terraform:  + direction = "ingress" 2025-11-23 00:01:37.689643 | orchestrator | 00:01:37.687 STDOUT terraform:  + ethertype = "IPv4" 2025-11-23 00:01:37.689647 | orchestrator | 00:01:37.687 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.689651 | orchestrator | 00:01:37.687 STDOUT terraform:  + port_range_max = 51820 2025-11-23 00:01:37.689655 | orchestrator | 00:01:37.688 STDOUT terraform:  + port_range_min = 51820 2025-11-23 00:01:37.689659 | orchestrator | 00:01:37.688 STDOUT terraform:  + protocol = "udp" 2025-11-23 00:01:37.689663 | orchestrator | 
00:01:37.688 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.689666 | orchestrator | 00:01:37.688 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-23 00:01:37.689670 | orchestrator | 00:01:37.688 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-23 00:01:37.689674 | orchestrator | 00:01:37.688 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-23 00:01:37.689682 | orchestrator | 00:01:37.688 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-23 00:01:37.689685 | orchestrator | 00:01:37.688 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.689689 | orchestrator | 00:01:37.688 STDOUT terraform:  } 2025-11-23 00:01:37.689693 | orchestrator | 00:01:37.688 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-11-23 00:01:37.689697 | orchestrator | 00:01:37.688 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-11-23 00:01:37.689709 | orchestrator | 00:01:37.688 STDOUT terraform:  + direction = "ingress" 2025-11-23 00:01:37.689716 | orchestrator | 00:01:37.688 STDOUT terraform:  + ethertype = "IPv4" 2025-11-23 00:01:37.689727 | orchestrator | 00:01:37.688 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.689731 | orchestrator | 00:01:37.688 STDOUT terraform:  + protocol = "tcp" 2025-11-23 00:01:37.689735 | orchestrator | 00:01:37.688 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.689738 | orchestrator | 00:01:37.688 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-23 00:01:37.689742 | orchestrator | 00:01:37.689 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-23 00:01:37.689746 | orchestrator | 00:01:37.689 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-23 00:01:37.689750 | orchestrator | 00:01:37.689 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-11-23 00:01:37.689754 | orchestrator | 00:01:37.689 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.689757 | orchestrator | 00:01:37.689 STDOUT terraform:  } 2025-11-23 00:01:37.689761 | orchestrator | 00:01:37.689 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-11-23 00:01:37.689765 | orchestrator | 00:01:37.689 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-11-23 00:01:37.689769 | orchestrator | 00:01:37.689 STDOUT terraform:  + direction = "ingress" 2025-11-23 00:01:37.689772 | orchestrator | 00:01:37.689 STDOUT terraform:  + ethertype = "IPv4" 2025-11-23 00:01:37.689776 | orchestrator | 00:01:37.689 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.689782 | orchestrator | 00:01:37.689 STDOUT terraform:  + protocol = "udp" 2025-11-23 00:01:37.689873 | orchestrator | 00:01:37.689 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.689880 | orchestrator | 00:01:37.689 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-23 00:01:37.689965 | orchestrator | 00:01:37.689 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-23 00:01:37.690464 | orchestrator | 00:01:37.689 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-23 00:01:37.693489 | orchestrator | 00:01:37.690 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-23 00:01:37.694056 | orchestrator | 00:01:37.693 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.694095 | orchestrator | 00:01:37.694 STDOUT terraform:  } 2025-11-23 00:01:37.694157 | orchestrator | 00:01:37.694 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-11-23 00:01:37.694228 | orchestrator | 00:01:37.694 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule5" { 2025-11-23 00:01:37.694405 | orchestrator | 00:01:37.694 STDOUT terraform:  + direction = "ingress" 2025-11-23 00:01:37.694485 | orchestrator | 00:01:37.694 STDOUT terraform:  + ethertype = "IPv4" 2025-11-23 00:01:37.694499 | orchestrator | 00:01:37.694 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.694529 | orchestrator | 00:01:37.694 STDOUT terraform:  + protocol = "icmp" 2025-11-23 00:01:37.694549 | orchestrator | 00:01:37.694 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.694560 | orchestrator | 00:01:37.694 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-23 00:01:37.694570 | orchestrator | 00:01:37.694 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-23 00:01:37.694580 | orchestrator | 00:01:37.694 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-23 00:01:37.694590 | orchestrator | 00:01:37.694 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-23 00:01:37.694600 | orchestrator | 00:01:37.694 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.694613 | orchestrator | 00:01:37.694 STDOUT terraform:  } 2025-11-23 00:01:37.694625 | orchestrator | 00:01:37.694 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-11-23 00:01:37.694683 | orchestrator | 00:01:37.694 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-11-23 00:01:37.694695 | orchestrator | 00:01:37.694 STDOUT terraform:  + direction = "ingress" 2025-11-23 00:01:37.694709 | orchestrator | 00:01:37.694 STDOUT terraform:  + ethertype = "IPv4" 2025-11-23 00:01:37.694777 | orchestrator | 00:01:37.694 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.694790 | orchestrator | 00:01:37.694 STDOUT terraform:  + protocol = "tcp" 2025-11-23 00:01:37.694804 | orchestrator | 00:01:37.694 STDOUT terraform:  + region = (known 
after apply) 2025-11-23 00:01:37.694867 | orchestrator | 00:01:37.694 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-23 00:01:37.694880 | orchestrator | 00:01:37.694 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-23 00:01:37.694893 | orchestrator | 00:01:37.694 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-23 00:01:37.694964 | orchestrator | 00:01:37.694 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-23 00:01:37.694977 | orchestrator | 00:01:37.694 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.694990 | orchestrator | 00:01:37.694 STDOUT terraform:  } 2025-11-23 00:01:37.695042 | orchestrator | 00:01:37.694 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-11-23 00:01:37.695081 | orchestrator | 00:01:37.695 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-11-23 00:01:37.695096 | orchestrator | 00:01:37.695 STDOUT terraform:  + direction = "ingress" 2025-11-23 00:01:37.695147 | orchestrator | 00:01:37.695 STDOUT terraform:  + ethertype = "IPv4" 2025-11-23 00:01:37.695163 | orchestrator | 00:01:37.695 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.695176 | orchestrator | 00:01:37.695 STDOUT terraform:  + protocol = "udp" 2025-11-23 00:01:37.695261 | orchestrator | 00:01:37.695 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.695337 | orchestrator | 00:01:37.695 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-23 00:01:37.695359 | orchestrator | 00:01:37.695 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-23 00:01:37.695372 | orchestrator | 00:01:37.695 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-23 00:01:37.695403 | orchestrator | 00:01:37.695 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-23 00:01:37.695417 | orchestrator | 
00:01:37.695 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.695431 | orchestrator | 00:01:37.695 STDOUT terraform:  } 2025-11-23 00:01:37.695490 | orchestrator | 00:01:37.695 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-11-23 00:01:37.695555 | orchestrator | 00:01:37.695 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-11-23 00:01:37.695572 | orchestrator | 00:01:37.695 STDOUT terraform:  + direction = "ingress" 2025-11-23 00:01:37.695584 | orchestrator | 00:01:37.695 STDOUT terraform:  + ethertype = "IPv4" 2025-11-23 00:01:37.695649 | orchestrator | 00:01:37.695 STDOUT terraform:  + id = (known after apply) 2025-11-23 00:01:37.695662 | orchestrator | 00:01:37.695 STDOUT terraform:  + protocol = "icmp" 2025-11-23 00:01:37.695675 | orchestrator | 00:01:37.695 STDOUT terraform:  + region = (known after apply) 2025-11-23 00:01:37.695739 | orchestrator | 00:01:37.695 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-23 00:01:37.695755 | orchestrator | 00:01:37.695 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-23 00:01:37.695787 | orchestrator | 00:01:37.695 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-23 00:01:37.695828 | orchestrator | 00:01:37.695 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-23 00:01:37.695841 | orchestrator | 00:01:37.695 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-23 00:01:37.695854 | orchestrator | 00:01:37.695 STDOUT terraform:  } 2025-11-23 00:01:37.695908 | orchestrator | 00:01:37.695 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-11-23 00:01:37.695976 | orchestrator | 00:01:37.695 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-11-23 00:01:37.695989 | orchestrator | 00:01:37.695 STDOUT 
terraform:  + description = "vrrp"
2025-11-23 00:01:37.696002 | orchestrator | 00:01:37.695 STDOUT terraform:  + direction = "ingress"
2025-11-23 00:01:37.696017 | orchestrator | 00:01:37.695 STDOUT terraform:  + ethertype = "IPv4"
2025-11-23 00:01:37.696071 | orchestrator | 00:01:37.696 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.696086 | orchestrator | 00:01:37.696 STDOUT terraform:  + protocol = "112"
2025-11-23 00:01:37.696122 | orchestrator | 00:01:37.696 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.696155 | orchestrator | 00:01:37.696 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-11-23 00:01:37.696218 | orchestrator | 00:01:37.696 STDOUT terraform:  + remote_group_id = (known after apply)
2025-11-23 00:01:37.696278 | orchestrator | 00:01:37.696 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-11-23 00:01:37.696290 | orchestrator | 00:01:37.696 STDOUT terraform:  + security_group_id = (known after apply)
2025-11-23 00:01:37.696303 | orchestrator | 00:01:37.696 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-23 00:01:37.696313 | orchestrator | 00:01:37.696 STDOUT terraform:  }
2025-11-23 00:01:37.696363 | orchestrator | 00:01:37.696 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-11-23 00:01:37.696439 | orchestrator | 00:01:37.696 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-11-23 00:01:37.696452 | orchestrator | 00:01:37.696 STDOUT terraform:  + all_tags = (known after apply)
2025-11-23 00:01:37.696466 | orchestrator | 00:01:37.696 STDOUT terraform:  + description = "management security group"
2025-11-23 00:01:37.696514 | orchestrator | 00:01:37.696 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.696529 | orchestrator | 00:01:37.696 STDOUT terraform:  + name = "testbed-management"
2025-11-23 00:01:37.696543 | orchestrator | 00:01:37.696 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.696580 | orchestrator | 00:01:37.696 STDOUT terraform:  + stateful = (known after apply)
2025-11-23 00:01:37.696596 | orchestrator | 00:01:37.696 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-23 00:01:37.696608 | orchestrator | 00:01:37.696 STDOUT terraform:  }
2025-11-23 00:01:37.696654 | orchestrator | 00:01:37.696 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-11-23 00:01:37.696687 | orchestrator | 00:01:37.696 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-11-23 00:01:37.696723 | orchestrator | 00:01:37.696 STDOUT terraform:  + all_tags = (known after apply)
2025-11-23 00:01:37.696757 | orchestrator | 00:01:37.696 STDOUT terraform:  + description = "node security group"
2025-11-23 00:01:37.696772 | orchestrator | 00:01:37.696 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.696807 | orchestrator | 00:01:37.696 STDOUT terraform:  + name = "testbed-node"
2025-11-23 00:01:37.696823 | orchestrator | 00:01:37.696 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.696857 | orchestrator | 00:01:37.696 STDOUT terraform:  + stateful = (known after apply)
2025-11-23 00:01:37.696898 | orchestrator | 00:01:37.696 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-23 00:01:37.696911 | orchestrator | 00:01:37.696 STDOUT terraform:  }
2025-11-23 00:01:37.696949 | orchestrator | 00:01:37.696 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-11-23 00:01:37.696993 | orchestrator | 00:01:37.696 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-11-23 00:01:37.697010 | orchestrator | 00:01:37.696 STDOUT terraform:  + all_tags = (known after apply)
2025-11-23 00:01:37.697049 | orchestrator | 00:01:37.697 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-11-23 00:01:37.697065 | orchestrator | 00:01:37.697 STDOUT terraform:  + dns_nameservers = [
2025-11-23 00:01:37.697089 | orchestrator | 00:01:37.697 STDOUT terraform:  + "8.8.8.8",
2025-11-23 00:01:37.697121 | orchestrator | 00:01:37.697 STDOUT terraform:  + "9.9.9.9",
2025-11-23 00:01:37.697132 | orchestrator | 00:01:37.697 STDOUT terraform:  ]
2025-11-23 00:01:37.697147 | orchestrator | 00:01:37.697 STDOUT terraform:  + enable_dhcp = true
2025-11-23 00:01:37.697158 | orchestrator | 00:01:37.697 STDOUT terraform:  + gateway_ip = (known after apply)
2025-11-23 00:01:37.697173 | orchestrator | 00:01:37.697 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.697234 | orchestrator | 00:01:37.697 STDOUT terraform:  + ip_version = 4
2025-11-23 00:01:37.697253 | orchestrator | 00:01:37.697 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-11-23 00:01:37.697264 | orchestrator | 00:01:37.697 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-11-23 00:01:37.697278 | orchestrator | 00:01:37.697 STDOUT terraform:  + name = "subnet-testbed-management"
2025-11-23 00:01:37.697292 | orchestrator | 00:01:37.697 STDOUT terraform:  + network_id = (known after apply)
2025-11-23 00:01:37.697335 | orchestrator | 00:01:37.697 STDOUT terraform:  + no_gateway = false
2025-11-23 00:01:37.697352 | orchestrator | 00:01:37.697 STDOUT terraform:  + region = (known after apply)
2025-11-23 00:01:37.697410 | orchestrator | 00:01:37.697 STDOUT terraform:  + service_types = (known after apply)
2025-11-23 00:01:37.697427 | orchestrator | 00:01:37.697 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-23 00:01:37.697439 | orchestrator | 00:01:37.697 STDOUT terraform:  + allocation_pool {
2025-11-23 00:01:37.697450 | orchestrator | 00:01:37.697 STDOUT terraform:  + end = "192.168.31.250"
2025-11-23 00:01:37.697469 | orchestrator | 00:01:37.697 STDOUT terraform:  + start = "192.168.31.200"
2025-11-23 00:01:37.697481 | orchestrator | 00:01:37.697 STDOUT terraform:  }
2025-11-23 00:01:37.697493 | orchestrator | 00:01:37.697 STDOUT terraform:  }
2025-11-23 00:01:37.697507 | orchestrator | 00:01:37.697 STDOUT terraform:  # terraform_data.image will be created
2025-11-23 00:01:37.697518 | orchestrator | 00:01:37.697 STDOUT terraform:  + resource "terraform_data" "image" {
2025-11-23 00:01:37.697532 | orchestrator | 00:01:37.697 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.697547 | orchestrator | 00:01:37.697 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-11-23 00:01:37.697561 | orchestrator | 00:01:37.697 STDOUT terraform:  + output = (known after apply)
2025-11-23 00:01:37.697576 | orchestrator | 00:01:37.697 STDOUT terraform:  }
2025-11-23 00:01:37.697590 | orchestrator | 00:01:37.697 STDOUT terraform:  # terraform_data.image_node will be created
2025-11-23 00:01:37.697630 | orchestrator | 00:01:37.697 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-11-23 00:01:37.697647 | orchestrator | 00:01:37.697 STDOUT terraform:  + id = (known after apply)
2025-11-23 00:01:37.697661 | orchestrator | 00:01:37.697 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-11-23 00:01:37.697676 | orchestrator | 00:01:37.697 STDOUT terraform:  + output = (known after apply)
2025-11-23 00:01:37.697711 | orchestrator | 00:01:37.697 STDOUT terraform:  }
2025-11-23 00:01:37.697726 | orchestrator | 00:01:37.697 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-11-23 00:01:37.697746 | orchestrator | 00:01:37.697 STDOUT terraform: Changes to Outputs:
2025-11-23 00:01:37.697761 | orchestrator | 00:01:37.697 STDOUT terraform:  + manager_address = (sensitive value)
2025-11-23 00:01:37.697772 | orchestrator | 00:01:37.697 STDOUT terraform:  + private_key = (sensitive value)
2025-11-23 00:01:37.862130 | orchestrator | 00:01:37.860 STDOUT terraform: terraform_data.image: Creating...
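The `terraform_data.image` and `terraform_data.image_node` entries in the plan above suggest the image name ("Ubuntu 24.04") is passed through a `terraform_data` resource before being resolved to a Glance image. A minimal sketch of HCL that would produce plan output like this; the variable name and the `most_recent` flag are assumptions, not taken from the testbed repository:

```hcl
# Hypothetical reconstruction of the image lookup seen in the plan.
variable "image" {
  type    = string
  default = "Ubuntu 24.04"
}

# terraform_data passes the name through; its output attribute is only
# known after apply, matching "+ output = (known after apply)" above.
resource "terraform_data" "image" {
  input = var.image
}

# Resolve the name to an image ID, as in the subsequent
# "data.openstack_images_image_v2.image: Reading..." log entry.
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```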
2025-11-23 00:01:37.862245 | orchestrator | 00:01:37.860 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=b53ecfd6-25ee-6eaf-cffc-39b9eec1e453]
2025-11-23 00:01:37.866149 | orchestrator | 00:01:37.862 STDOUT terraform: terraform_data.image_node: Creating...
2025-11-23 00:01:37.867473 | orchestrator | 00:01:37.867 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=4744a122-3780-b48e-7a8a-bbda2ce1ba4e]
2025-11-23 00:01:37.880841 | orchestrator | 00:01:37.880 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-11-23 00:01:37.881927 | orchestrator | 00:01:37.881 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-11-23 00:01:37.892833 | orchestrator | 00:01:37.892 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-11-23 00:01:37.906094 | orchestrator | 00:01:37.899 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-11-23 00:01:37.906156 | orchestrator | 00:01:37.899 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-11-23 00:01:37.906163 | orchestrator | 00:01:37.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-11-23 00:01:37.906168 | orchestrator | 00:01:37.904 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-11-23 00:01:37.908746 | orchestrator | 00:01:37.908 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-11-23 00:01:37.908930 | orchestrator | 00:01:37.908 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-11-23 00:01:37.910117 | orchestrator | 00:01:37.910 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-11-23 00:01:38.410887 | orchestrator | 00:01:38.410 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-11-23 00:01:38.412020 | orchestrator | 00:01:38.411 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-11-23 00:01:38.417745 | orchestrator | 00:01:38.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-11-23 00:01:38.418886 | orchestrator | 00:01:38.418 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-11-23 00:01:38.422039 | orchestrator | 00:01:38.421 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-11-23 00:01:38.426405 | orchestrator | 00:01:38.426 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-11-23 00:01:38.896758 | orchestrator | 00:01:38.896 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=d5964c65-07bb-4535-be5f-a670dec6d1c7]
2025-11-23 00:01:38.902772 | orchestrator | 00:01:38.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-11-23 00:01:41.564105 | orchestrator | 00:01:41.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=9bb12db9-718e-4660-80a8-4889452babe1]
2025-11-23 00:01:41.575432 | orchestrator | 00:01:41.575 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-11-23 00:01:41.602528 | orchestrator | 00:01:41.602 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=8067a508-692c-4377-81f7-31a1d1b351f4]
2025-11-23 00:01:41.608696 | orchestrator | 00:01:41.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=6228c6cf-84a4-441a-8cc9-9597cabd600f]
2025-11-23 00:01:41.608955 | orchestrator | 00:01:41.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-11-23 00:01:41.617521 | orchestrator | 00:01:41.617 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-11-23 00:01:41.627352 | orchestrator | 00:01:41.627 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=0964e8b1-b5e3-4f47-9890-2712ab1da39b]
2025-11-23 00:01:41.632933 | orchestrator | 00:01:41.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-11-23 00:01:41.727976 | orchestrator | 00:01:41.727 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=8a2d036f-63dd-4edf-8f40-5cb15ccba33f]
2025-11-23 00:01:41.728294 | orchestrator | 00:01:41.727 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=2b7e306c-9c4d-42db-9fc4-69fec959c356]
2025-11-23 00:01:41.729046 | orchestrator | 00:01:41.728 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=d3bc663b-2fb7-4f3a-80f5-8fec376801b0]
2025-11-23 00:01:41.735377 | orchestrator | 00:01:41.734 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=5ed148ed-cabe-49ec-beea-f05b5632a7aa]
2025-11-23 00:01:41.738763 | orchestrator | 00:01:41.738 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-11-23 00:01:41.742381 | orchestrator | 00:01:41.742 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-11-23 00:01:41.745230 | orchestrator | 00:01:41.745 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=90348fbb-4b76-43ea-ac95-9b7258782d3f]
2025-11-23 00:01:41.750957 | orchestrator | 00:01:41.750 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-11-23 00:01:41.751011 | orchestrator | 00:01:41.750 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-11-23 00:01:41.758407 | orchestrator | 00:01:41.758 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=bd75852b1aa25df0f8e18e3c5cb2cd2d28547390]
2025-11-23 00:01:41.759245 | orchestrator | 00:01:41.759 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-11-23 00:01:41.759619 | orchestrator | 00:01:41.759 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=8527b74306f977bacc14722b7f046cf929755955]
2025-11-23 00:01:42.255587 | orchestrator | 00:01:42.255 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=9b068fe4-9aa6-4103-84ba-dc9167f04e78]
2025-11-23 00:01:42.875975 | orchestrator | 00:01:42.875 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=6d95a90f-f3ec-4272-8dd1-d352d19fda86]
2025-11-23 00:01:42.883642 | orchestrator | 00:01:42.883 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-11-23 00:01:45.006148 | orchestrator | 00:01:45.005 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=552cdbe4-d2a6-4e41-9a4e-2added6a6c3a]
2025-11-23 00:01:45.126281 | orchestrator | 00:01:45.125 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=3a44215a-1b34-44d7-81a4-9c2ea4da2999]
2025-11-23 00:01:45.167427 | orchestrator | 00:01:45.167 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=624b486d-3dba-4024-bac7-13317dda40b1]
2025-11-23 00:01:45.197300 | orchestrator | 00:01:45.197 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=0839421d-4e00-46fc-9b28-0fb70e6d13db]
2025-11-23 00:01:45.243446 | orchestrator | 00:01:45.243 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=fdc43c48-f9ac-4c73-b149-10da09bc2a11]
2025-11-23 00:01:45.249257 | orchestrator | 00:01:45.248 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=48181c8e-5a9a-4def-86fd-b6a2b5ab4b67]
2025-11-23 00:01:45.625660 | orchestrator | 00:01:45.625 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=d64ec922-b290-4bfd-9b5e-41a5ff73e812]
2025-11-23 00:01:45.632951 | orchestrator | 00:01:45.632 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-11-23 00:01:45.633042 | orchestrator | 00:01:45.632 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-11-23 00:01:45.634099 | orchestrator | 00:01:45.633 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
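The `subnet_management` resource created above (CIDR 192.168.16.0/20, DHCP pool 192.168.31.200-250, Google and Quad9 resolvers) corresponds to HCL roughly like the following sketch. Attribute values are copied from the plan output; the `network_id` reference is an assumption based on the `net_management` resource name in the log:

```hcl
# Hypothetical reconstruction of subnet_management from the plan output.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # Matches the allocation_pool block shown in the plan.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```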
2025-11-23 00:01:45.879599 | orchestrator | 00:01:45.879 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=2fd1e65c-6e89-4c7f-840c-afec2c10e0ce]
2025-11-23 00:01:45.889179 | orchestrator | 00:01:45.888 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-11-23 00:01:45.892694 | orchestrator | 00:01:45.892 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-11-23 00:01:45.893953 | orchestrator | 00:01:45.893 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-11-23 00:01:45.894735 | orchestrator | 00:01:45.894 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-11-23 00:01:45.896398 | orchestrator | 00:01:45.896 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-11-23 00:01:45.896767 | orchestrator | 00:01:45.896 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ba2b3ed6-48be-404c-a9b3-e636e73b0558]
2025-11-23 00:01:45.898417 | orchestrator | 00:01:45.898 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-11-23 00:01:45.909648 | orchestrator | 00:01:45.908 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-11-23 00:01:45.910779 | orchestrator | 00:01:45.910 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-11-23 00:01:45.911566 | orchestrator | 00:01:45.911 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-11-23 00:01:46.097499 | orchestrator | 00:01:46.097 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=d68ca0f7-38d9-463e-b006-1d216b6adcfb]
2025-11-23 00:01:46.106005 | orchestrator | 00:01:46.105 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-11-23 00:01:46.373321 | orchestrator | 00:01:46.372 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=ad1d35b6-f4f9-449e-9724-24ee4d7837ef]
2025-11-23 00:01:46.383472 | orchestrator | 00:01:46.383 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-11-23 00:01:46.526789 | orchestrator | 00:01:46.526 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=2495bad2-fdf9-42ba-ad0a-669365662eff]
2025-11-23 00:01:46.537376 | orchestrator | 00:01:46.537 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-11-23 00:01:46.551406 | orchestrator | 00:01:46.551 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=e7227fe4-510d-42c0-8bf9-7a64be9f52f6]
2025-11-23 00:01:46.568404 | orchestrator | 00:01:46.568 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-11-23 00:01:46.775594 | orchestrator | 00:01:46.775 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=bf720b01-bb36-4442-b84b-d67228048cd9]
2025-11-23 00:01:46.788563 | orchestrator | 00:01:46.788 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-11-23 00:01:46.929256 | orchestrator | 00:01:46.928 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=058c8223-9072-4667-ba61-fb52f5d3529e]
2025-11-23 00:01:46.945599 | orchestrator | 00:01:46.945 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-11-23 00:01:47.011340 | orchestrator | 00:01:47.010 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=bf5aa34c-b8c7-4b16-9d2c-1fae20358e48]
2025-11-23 00:01:47.023981 | orchestrator | 00:01:47.023 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-11-23 00:01:47.218301 | orchestrator | 00:01:47.218 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=da418718-291f-4d0a-ae96-8f1130111cdf]
2025-11-23 00:01:47.434392 | orchestrator | 00:01:47.433 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=a4094702-4847-4ac0-9614-6b88a2775252]
2025-11-23 00:01:47.435967 | orchestrator | 00:01:47.435 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=16c19142-7391-470b-835d-11be1bd2847b]
2025-11-23 00:01:47.448633 | orchestrator | 00:01:47.448 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=0661a76a-f4a0-4b39-b71a-baaa5b5ae921]
2025-11-23 00:01:47.569543 | orchestrator | 00:01:47.569 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=eed76004-bdbc-4a02-9c9f-2671628ad413]
2025-11-23 00:01:47.592424 | orchestrator | 00:01:47.591 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=ab3eb3ea-8388-4ea4-a6e7-be75bd666cf3]
2025-11-23 00:01:47.607110 | orchestrator | 00:01:47.606 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=f6dcbdba-13af-46cb-9555-819a9fc258d1]
2025-11-23 00:01:47.789047 | orchestrator | 00:01:47.788 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=07cc13b4-a952-4b94-80bd-932a83d1b64b]
2025-11-23 00:01:47.807454 | orchestrator | 00:01:47.807 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=5899e256-7ea2-4b16-8cdc-adbf417d203d]
2025-11-23 00:01:51.520238 | orchestrator | 00:01:51.519 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=42195001-611a-404e-9d55-50dc14eba0c1]
2025-11-23 00:01:51.541968 | orchestrator | 00:01:51.541 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-11-23 00:01:51.552382 | orchestrator | 00:01:51.552 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-11-23 00:01:51.560996 | orchestrator | 00:01:51.560 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-11-23 00:01:51.570800 | orchestrator | 00:01:51.570 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-11-23 00:01:51.576491 | orchestrator | 00:01:51.576 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-11-23 00:01:51.579429 | orchestrator | 00:01:51.579 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-11-23 00:01:51.583528 | orchestrator | 00:01:51.583 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
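The `security_group_rule_vrrp` rule created above matches the plan attributes shown at the start of this section: IP protocol 112 is VRRP, which keepalived needs for failover traffic between the testbed nodes. A sketch of the corresponding HCL; the attribute values come from the plan output, while the security group reference is an assumption:

```hcl
# Hypothetical reconstruction of the VRRP rule from the plan output.
# Protocol "112" is VRRP; the rule is open to 0.0.0.0/0 as planned.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```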
2025-11-23 00:01:53.042329 | orchestrator | 00:01:53.041 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=d6864283-01e5-4a72-9990-fbca556312be]
2025-11-23 00:01:53.053506 | orchestrator | 00:01:53.053 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-11-23 00:01:53.054132 | orchestrator | 00:01:53.053 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-11-23 00:01:53.058593 | orchestrator | 00:01:53.058 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=531900af26bddb96bf4907c358cf6b6fd7eb00aa]
2025-11-23 00:01:53.063168 | orchestrator | 00:01:53.063 STDOUT terraform: local_file.inventory: Creating...
2025-11-23 00:01:53.066523 | orchestrator | 00:01:53.066 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=0f7f104cf9b98e1a19edbee04d820fb7286929d1]
2025-11-23 00:01:53.921900 | orchestrator | 00:01:53.921 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=d6864283-01e5-4a72-9990-fbca556312be]
2025-11-23 00:02:01.551719 | orchestrator | 00:02:01.551 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-11-23 00:02:01.562163 | orchestrator | 00:02:01.561 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-11-23 00:02:01.572749 | orchestrator | 00:02:01.572 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-11-23 00:02:01.578870 | orchestrator | 00:02:01.578 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-11-23 00:02:01.583429 | orchestrator | 00:02:01.583 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-11-23 00:02:01.587644 | orchestrator | 00:02:01.587 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-11-23 00:02:11.552813 | orchestrator | 00:02:11.552 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-11-23 00:02:11.563115 | orchestrator | 00:02:11.562 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-11-23 00:02:11.573678 | orchestrator | 00:02:11.573 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-11-23 00:02:11.579724 | orchestrator | 00:02:11.579 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-11-23 00:02:11.584170 | orchestrator | 00:02:11.583 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-11-23 00:02:11.588879 | orchestrator | 00:02:11.588 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-11-23 00:02:21.553298 | orchestrator | 00:02:21.553 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-11-23 00:02:21.563692 | orchestrator | 00:02:21.563 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-11-23 00:02:21.574376 | orchestrator | 00:02:21.573 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-11-23 00:02:21.580674 | orchestrator | 00:02:21.580 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-11-23 00:02:21.584910 | orchestrator | 00:02:21.584 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-11-23 00:02:21.589161 | orchestrator | 00:02:21.588 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-11-23 00:02:22.172439 | orchestrator | 00:02:22.172 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=94599fe5-9667-4b66-8f1f-9f21c94e53f8]
2025-11-23 00:02:22.223323 | orchestrator | 00:02:22.223 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=aa7b6e77-a108-402b-8452-d5a52d198d17]
2025-11-23 00:02:22.251217 | orchestrator | 00:02:22.250 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=b38eae06-467b-4d45-92f2-91c01234dfe7]
2025-11-23 00:02:22.321462 | orchestrator | 00:02:22.321 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=0ff8ed68-67aa-4628-ba09-6872be38e3d9]
2025-11-23 00:02:31.581043 | orchestrator | 00:02:31.580 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2025-11-23 00:02:31.585528 | orchestrator | 00:02:31.585 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2025-11-23 00:02:32.590255 | orchestrator | 00:02:32.589 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=56af6d56-680f-4f10-82e1-802947779cf2]
2025-11-23 00:02:32.755559 | orchestrator | 00:02:32.755 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=bdc74324-cae6-4c88-b8b4-fecef74589e5]
2025-11-23 00:02:32.771638 | orchestrator | 00:02:32.771 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-11-23 00:02:32.775430 | orchestrator | 00:02:32.775 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=805577578900409100]
2025-11-23 00:02:32.798887 | orchestrator | 00:02:32.798 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
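Once all six node servers exist, the nine `node_volume_attachment` resources attach the pre-created volumes to them in parallel. A sketch using `count`; the index arithmetic (which volume goes to which node) is an assumption for illustration and does not necessarily match the testbed's actual mapping:

```hcl
# Hypothetical sketch of the volume attachments seen in the apply log.
# The modulo mapping of volumes to nodes is an assumption.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

Attachment IDs in the log take the form `instance_id/volume_id`, which is why each completion line shows two UUIDs.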
2025-11-23 00:02:32.799013 | orchestrator | 00:02:32.798 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-11-23 00:02:32.800518 | orchestrator | 00:02:32.800 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-11-23 00:02:32.805545 | orchestrator | 00:02:32.805 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-11-23 00:02:32.822362 | orchestrator | 00:02:32.821 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-11-23 00:02:32.832942 | orchestrator | 00:02:32.832 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-11-23 00:02:32.838356 | orchestrator | 00:02:32.837 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-11-23 00:02:32.838437 | orchestrator | 00:02:32.838 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-11-23 00:02:32.839026 | orchestrator | 00:02:32.838 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-11-23 00:02:32.850986 | orchestrator | 00:02:32.850 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-11-23 00:02:36.196223 | orchestrator | 00:02:36.195 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=94599fe5-9667-4b66-8f1f-9f21c94e53f8/6228c6cf-84a4-441a-8cc9-9597cabd600f]
2025-11-23 00:02:36.199498 | orchestrator | 00:02:36.199 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=bdc74324-cae6-4c88-b8b4-fecef74589e5/8a2d036f-63dd-4edf-8f40-5cb15ccba33f]
2025-11-23 00:02:36.229397 | orchestrator | 00:02:36.228 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=94599fe5-9667-4b66-8f1f-9f21c94e53f8/2b7e306c-9c4d-42db-9fc4-69fec959c356]
2025-11-23 00:02:36.246167 | orchestrator | 00:02:36.245 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=56af6d56-680f-4f10-82e1-802947779cf2/90348fbb-4b76-43ea-ac95-9b7258782d3f]
2025-11-23 00:02:36.252594 | orchestrator | 00:02:36.252 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=bdc74324-cae6-4c88-b8b4-fecef74589e5/8067a508-692c-4377-81f7-31a1d1b351f4]
2025-11-23 00:02:36.271580 | orchestrator | 00:02:36.270 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=94599fe5-9667-4b66-8f1f-9f21c94e53f8/d3bc663b-2fb7-4f3a-80f5-8fec376801b0]
2025-11-23 00:02:36.288536 | orchestrator | 00:02:36.287 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=56af6d56-680f-4f10-82e1-802947779cf2/0964e8b1-b5e3-4f47-9890-2712ab1da39b]
2025-11-23 00:02:42.374327 | orchestrator | 00:02:42.373 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=bdc74324-cae6-4c88-b8b4-fecef74589e5/9bb12db9-718e-4660-80a8-4889452babe1]
2025-11-23 00:02:42.387995 | orchestrator |
00:02:42.387 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=56af6d56-680f-4f10-82e1-802947779cf2/5ed148ed-cabe-49ec-beea-f05b5632a7aa]
2025-11-23 00:02:42.840844 | orchestrator | 00:02:42.840 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-11-23 00:02:52.842121 | orchestrator | 00:02:52.841 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-11-23 00:02:53.208834 | orchestrator | 00:02:53.208 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=9c5b71e0-11be-4f27-b926-c2b895748942]
2025-11-23 00:02:53.228211 | orchestrator | 00:02:53.227 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-11-23 00:02:53.228289 | orchestrator | 00:02:53.228 STDOUT terraform: Outputs:
2025-11-23 00:02:53.228304 | orchestrator | 00:02:53.228 STDOUT terraform: manager_address =
2025-11-23 00:02:53.228324 | orchestrator | 00:02:53.228 STDOUT terraform: private_key =
2025-11-23 00:02:53.465649 | orchestrator | ok: Runtime: 0:01:21.357585
2025-11-23 00:02:53.499869 |
2025-11-23 00:02:53.500005 | TASK [Create infrastructure (stable)]
2025-11-23 00:02:54.033612 | orchestrator | skipping: Conditional result was False
2025-11-23 00:02:54.057866 |
2025-11-23 00:02:54.058096 | TASK [Fetch manager address]
2025-11-23 00:02:54.497208 | orchestrator | ok
2025-11-23 00:02:54.508471 |
2025-11-23 00:02:54.508641 | TASK [Set manager_host address]
2025-11-23 00:02:54.591669 | orchestrator | ok
2025-11-23 00:02:54.602212 |
2025-11-23 00:02:54.602378 | LOOP [Update ansible collections]
2025-11-23 00:02:58.031351 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-11-23 00:02:58.031774 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-11-23 00:02:58.031855 | orchestrator | Starting galaxy collection install process
2025-11-23 00:02:58.031895 | orchestrator | Process install dependency map
2025-11-23 00:02:58.035284 | orchestrator | Starting collection install process
2025-11-23 00:02:58.035407 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2025-11-23 00:02:58.035458 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2025-11-23 00:02:58.035501 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-11-23 00:02:58.035647 | orchestrator | ok: Item: commons Runtime: 0:00:03.019808
2025-11-23 00:02:59.251207 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-11-23 00:02:59.251393 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-11-23 00:02:59.251435 | orchestrator | Starting galaxy collection install process
2025-11-23 00:02:59.251463 | orchestrator | Process install dependency map
2025-11-23 00:02:59.251489 | orchestrator | Starting collection install process
2025-11-23 00:02:59.251513 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2025-11-23 00:02:59.251537 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2025-11-23 00:02:59.251559 | orchestrator | osism.services:999.0.0 was installed successfully
2025-11-23 00:02:59.251596 | orchestrator | ok: Item: services Runtime: 0:00:00.939511
2025-11-23 00:02:59.273870 |
2025-11-23 00:02:59.274007 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-11-23 00:03:09.807728 | orchestrator | ok
2025-11-23 00:03:09.815973 |
2025-11-23 00:03:09.816090 | TASK [Wait a little longer for the manager so that everything is ready]
2025-11-23 00:04:09.867901 | orchestrator | ok
2025-11-23 00:04:09.879541 |
2025-11-23 00:04:09.879692 | TASK [Fetch manager ssh hostkey]
2025-11-23 00:04:11.455771 | orchestrator | Output suppressed because no_log was given
2025-11-23 00:04:11.471809 |
2025-11-23 00:04:11.471998 | TASK [Get ssh keypair from terraform environment]
2025-11-23 00:04:12.012515 | orchestrator | ok: Runtime: 0:00:00.011767
2025-11-23 00:04:12.029783 |
2025-11-23 00:04:12.029955 | TASK [Point out that the following task takes some time and does not give any output]
2025-11-23 00:04:12.080704 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-11-23 00:04:12.092237 |
2025-11-23 00:04:12.093267 | TASK [Run manager part 0]
2025-11-23 00:04:13.701496 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-11-23 00:04:13.930630 | orchestrator |
2025-11-23 00:04:13.930722 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-11-23 00:04:13.930740 | orchestrator |
2025-11-23 00:04:13.930821 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-11-23 00:04:15.539733 | orchestrator | ok: [testbed-manager]
2025-11-23 00:04:15.539781 | orchestrator |
2025-11-23 00:04:15.539802 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-11-23 00:04:15.539811 | orchestrator |
2025-11-23 00:04:15.539819 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-23 00:04:17.400749 | orchestrator | ok: [testbed-manager]
2025-11-23 00:04:17.400806 | orchestrator |
2025-11-23 00:04:17.400818 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-11-23 00:04:18.074406 |
orchestrator | ok: [testbed-manager] 2025-11-23 00:04:18.074468 | orchestrator | 2025-11-23 00:04:18.074477 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-11-23 00:04:18.134186 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:04:18.134246 | orchestrator | 2025-11-23 00:04:18.134257 | orchestrator | TASK [Update package cache] **************************************************** 2025-11-23 00:04:18.175692 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:04:18.175745 | orchestrator | 2025-11-23 00:04:18.175757 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-23 00:04:18.205453 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:04:18.205502 | orchestrator | 2025-11-23 00:04:18.205511 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-11-23 00:04:18.233724 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:04:18.233772 | orchestrator | 2025-11-23 00:04:18.233781 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-23 00:04:18.266837 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:04:18.266895 | orchestrator | 2025-11-23 00:04:18.266903 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2025-11-23 00:04:18.297019 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:04:18.297064 | orchestrator | 2025-11-23 00:04:18.297071 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-11-23 00:04:18.334934 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:04:18.334979 | orchestrator | 2025-11-23 00:04:18.334987 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-11-23 00:04:19.046875 | orchestrator | changed: [testbed-manager] 2025-11-23 00:04:19.046924 | 
orchestrator | 2025-11-23 00:04:19.046933 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-11-23 00:06:36.219394 | orchestrator | changed: [testbed-manager] 2025-11-23 00:06:36.219470 | orchestrator | 2025-11-23 00:06:36.219490 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-11-23 00:07:46.218378 | orchestrator | changed: [testbed-manager] 2025-11-23 00:07:46.218588 | orchestrator | 2025-11-23 00:07:46.218608 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-23 00:08:07.300304 | orchestrator | changed: [testbed-manager] 2025-11-23 00:08:07.300422 | orchestrator | 2025-11-23 00:08:07.300443 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-11-23 00:08:14.528584 | orchestrator | changed: [testbed-manager] 2025-11-23 00:08:14.528702 | orchestrator | 2025-11-23 00:08:14.528730 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-23 00:08:14.580921 | orchestrator | ok: [testbed-manager] 2025-11-23 00:08:14.581064 | orchestrator | 2025-11-23 00:08:14.581094 | orchestrator | TASK [Get current user] ******************************************************** 2025-11-23 00:08:15.319618 | orchestrator | ok: [testbed-manager] 2025-11-23 00:08:15.319699 | orchestrator | 2025-11-23 00:08:15.319713 | orchestrator | TASK [Create venv directory] *************************************************** 2025-11-23 00:08:16.023791 | orchestrator | changed: [testbed-manager] 2025-11-23 00:08:16.023876 | orchestrator | 2025-11-23 00:08:16.023890 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-11-23 00:08:21.740310 | orchestrator | changed: [testbed-manager] 2025-11-23 00:08:21.740435 | orchestrator | 2025-11-23 00:08:21.740495 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-11-23 00:08:27.225575 | orchestrator | changed: [testbed-manager] 2025-11-23 00:08:27.225625 | orchestrator | 2025-11-23 00:08:27.225636 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-11-23 00:08:29.679906 | orchestrator | changed: [testbed-manager] 2025-11-23 00:08:29.679975 | orchestrator | 2025-11-23 00:08:29.679989 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-11-23 00:08:31.260246 | orchestrator | changed: [testbed-manager] 2025-11-23 00:08:31.260316 | orchestrator | 2025-11-23 00:08:31.260328 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-11-23 00:08:32.279058 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-23 00:08:32.279160 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-23 00:08:32.279177 | orchestrator | 2025-11-23 00:08:32.279189 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-11-23 00:08:32.324510 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-23 00:08:32.324595 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-23 00:08:32.324610 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-11-23 00:08:32.324622 | orchestrator | deprecation_warnings=False in ansible.cfg. 
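The DEPRECATION WARNING above names its own off switch; a minimal sketch of the corresponding ansible.cfg stanza (placement in the playbook directory is an assumption — any of Ansible's config search locations works):

```ini
; ansible.cfg — disables the deprecation warnings quoted above
[defaults]
deprecation_warnings = False
```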
2025-11-23 00:08:38.405172 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-23 00:08:38.405210 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-23 00:08:38.405215 | orchestrator | 2025-11-23 00:08:38.405221 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-11-23 00:08:38.955722 | orchestrator | changed: [testbed-manager] 2025-11-23 00:08:38.955805 | orchestrator | 2025-11-23 00:08:38.955820 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-11-23 00:09:57.302688 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-11-23 00:09:57.302782 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-11-23 00:09:57.302797 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-11-23 00:09:57.302819 | orchestrator | 2025-11-23 00:09:57.302829 | orchestrator | TASK [Install local collections] *********************************************** 2025-11-23 00:09:59.444421 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-11-23 00:09:59.444590 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-11-23 00:09:59.444610 | orchestrator | 2025-11-23 00:09:59.444622 | orchestrator | PLAY [Create operator user] **************************************************** 2025-11-23 00:09:59.444635 | orchestrator | 2025-11-23 00:09:59.444646 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-23 00:10:00.786638 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:00.786732 | orchestrator | 2025-11-23 00:10:00.786751 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-11-23 00:10:00.833637 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:00.833726 | 
orchestrator | 2025-11-23 00:10:00.833748 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-11-23 00:10:00.934772 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:00.934871 | orchestrator | 2025-11-23 00:10:00.934890 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-11-23 00:10:01.701443 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:01.701556 | orchestrator | 2025-11-23 00:10:01.701581 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-11-23 00:10:02.357601 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:02.357671 | orchestrator | 2025-11-23 00:10:02.357682 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-11-23 00:10:03.639468 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-11-23 00:10:03.639511 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-11-23 00:10:03.639518 | orchestrator | 2025-11-23 00:10:03.639533 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-11-23 00:10:04.976611 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:04.976722 | orchestrator | 2025-11-23 00:10:04.976739 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-11-23 00:10:06.641050 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-11-23 00:10:06.641146 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-11-23 00:10:06.641161 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-11-23 00:10:06.641173 | orchestrator | 2025-11-23 00:10:06.641185 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-11-23 00:10:06.709269 | orchestrator | skipping: 
[testbed-manager] 2025-11-23 00:10:06.709334 | orchestrator | 2025-11-23 00:10:06.709342 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2025-11-23 00:10:06.788053 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:06.788131 | orchestrator | 2025-11-23 00:10:06.788145 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-11-23 00:10:07.321854 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:07.321899 | orchestrator | 2025-11-23 00:10:07.321908 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-11-23 00:10:07.412320 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:07.412351 | orchestrator | 2025-11-23 00:10:07.412356 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-11-23 00:10:08.233982 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-23 00:10:08.234129 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:08.234146 | orchestrator | 2025-11-23 00:10:08.234159 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-11-23 00:10:08.264711 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:08.264801 | orchestrator | 2025-11-23 00:10:08.264818 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-11-23 00:10:08.307904 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:08.307997 | orchestrator | 2025-11-23 00:10:08.308038 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-11-23 00:10:08.346732 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:08.346824 | orchestrator | 2025-11-23 00:10:08.346844 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-11-23 00:10:08.420836 | 
orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:08.420931 | orchestrator | 2025-11-23 00:10:08.420947 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-11-23 00:10:09.095215 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:09.095299 | orchestrator | 2025-11-23 00:10:09.095315 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-11-23 00:10:09.095326 | orchestrator | 2025-11-23 00:10:09.095337 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-23 00:10:10.415408 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:10.415446 | orchestrator | 2025-11-23 00:10:10.415451 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-11-23 00:10:11.348523 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:11.348560 | orchestrator | 2025-11-23 00:10:11.348566 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:10:11.348572 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2025-11-23 00:10:11.348576 | orchestrator | 2025-11-23 00:10:11.853285 | orchestrator | ok: Runtime: 0:05:59.045616 2025-11-23 00:10:11.870900 | 2025-11-23 00:10:11.871078 | TASK [Point out that logging in to the manager is now possible] 2025-11-23 00:10:11.913665 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-11-23 00:10:11.924307 | 2025-11-23 00:10:11.924444 | TASK [Point out that the following task takes some time and does not give any output] 2025-11-23 00:10:11.960623 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
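'Run manager part 0' above centres on bootstrapping /opt/venv before anything is installed into it. A hedged, offline-safe sketch of that step (a temporary path stands in for /opt/venv; the real play additionally pip-installs netaddr, ansible-core, requests and docker):

```shell
set -e
venv="$(mktemp -d)/venv"               # stand-in for /opt/venv
python3 -m venv --without-pip "$venv"  # --without-pip keeps the sketch offline
"$venv/bin/python" - <<'EOF'
import sys
# Inside a venv, sys.prefix differs from sys.base_prefix.
print("venv" if sys.prefix != sys.base_prefix else "system")
EOF
```

The heredoc prints "venv", confirming the interpreter in the new directory is venv-scoped rather than the system Python.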
2025-11-23 00:10:11.970517 | 2025-11-23 00:10:11.970728 | TASK [Run manager part 1 + 2] 2025-11-23 00:10:12.800213 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-23 00:10:12.857111 | orchestrator | 2025-11-23 00:10:12.857207 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-11-23 00:10:12.857241 | orchestrator | 2025-11-23 00:10:12.857310 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-23 00:10:15.647540 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:15.647774 | orchestrator | 2025-11-23 00:10:15.647839 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-23 00:10:15.683778 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:15.683857 | orchestrator | 2025-11-23 00:10:15.683876 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-23 00:10:15.723825 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:15.723908 | orchestrator | 2025-11-23 00:10:15.723924 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-23 00:10:15.775492 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:15.775549 | orchestrator | 2025-11-23 00:10:15.775557 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-23 00:10:15.866465 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:15.866560 | orchestrator | 2025-11-23 00:10:15.866580 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-23 00:10:15.932182 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:15.932265 | orchestrator | 2025-11-23 00:10:15.932282 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-23 00:10:15.989775 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-11-23 00:10:15.989862 | orchestrator | 2025-11-23 00:10:15.989877 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-23 00:10:16.720781 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:16.720832 | orchestrator | 2025-11-23 00:10:16.720842 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-23 00:10:16.776107 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:16.776153 | orchestrator | 2025-11-23 00:10:16.776160 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-23 00:10:18.114157 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:18.114217 | orchestrator | 2025-11-23 00:10:18.114229 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-23 00:10:18.651977 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:18.652078 | orchestrator | 2025-11-23 00:10:18.652128 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-23 00:10:19.732581 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:19.732635 | orchestrator | 2025-11-23 00:10:19.732642 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-23 00:10:37.663936 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:37.664200 | orchestrator | 2025-11-23 00:10:37.664225 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-11-23 00:10:38.318612 | orchestrator | ok: [testbed-manager] 2025-11-23 00:10:38.318702 | orchestrator | 2025-11-23 00:10:38.318718 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-11-23 00:10:38.402083 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:38.402169 | orchestrator | 2025-11-23 00:10:38.402185 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-11-23 00:10:39.289522 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:39.289615 | orchestrator | 2025-11-23 00:10:39.289640 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-11-23 00:10:40.171529 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:40.172272 | orchestrator | 2025-11-23 00:10:40.172338 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-11-23 00:10:40.718668 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:40.718759 | orchestrator | 2025-11-23 00:10:40.718776 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-11-23 00:10:40.761772 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-23 00:10:40.761867 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-23 00:10:40.761878 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-11-23 00:10:40.761888 | orchestrator | deprecation_warnings=False in ansible.cfg. 
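The two SSH key-copy tasks above only work if the private key lands with strict permissions; a small sketch of that invariant (paths and key material are placeholders, not the build's real key):

```shell
set -e
home="$(mktemp -d)"                       # stand-in for the operator's $HOME
install -d -m 700 "$home/.ssh"            # .ssh itself must be 700
printf 'PLACEHOLDER-KEY-MATERIAL\n' > "$home/.ssh/id_rsa"
chmod 600 "$home/.ssh/id_rsa"             # ssh refuses group/world-readable keys
stat -c '%a' "$home/.ssh/id_rsa"          # prints "600" (GNU coreutils stat)
```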
2025-11-23 00:10:42.961589 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:42.961657 | orchestrator | 2025-11-23 00:10:42.961673 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-11-23 00:10:50.835360 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-11-23 00:10:51.304153 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-11-23 00:10:51.304228 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-11-23 00:10:51.304244 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-11-23 00:10:51.304268 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-11-23 00:10:51.304279 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-11-23 00:10:51.304291 | orchestrator | 2025-11-23 00:10:51.304303 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-11-23 00:10:52.156176 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:52.156228 | orchestrator | 2025-11-23 00:10:52.156241 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-11-23 00:10:52.196246 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:52.196281 | orchestrator | 2025-11-23 00:10:52.196287 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-11-23 00:10:55.062256 | orchestrator | changed: [testbed-manager] 2025-11-23 00:10:55.062348 | orchestrator | 2025-11-23 00:10:55.062364 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-11-23 00:10:55.108373 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:10:55.108475 | orchestrator | 2025-11-23 00:10:55.108493 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-11-23 00:12:24.470080 | orchestrator | changed: [testbed-manager] 2025-11-23 
00:12:24.470213 | orchestrator | 2025-11-23 00:12:24.470234 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-23 00:12:25.451978 | orchestrator | ok: [testbed-manager] 2025-11-23 00:12:25.452020 | orchestrator | 2025-11-23 00:12:25.452027 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:12:25.452034 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-11-23 00:12:25.452039 | orchestrator | 2025-11-23 00:12:25.868053 | orchestrator | ok: Runtime: 0:02:13.285484 2025-11-23 00:12:25.885278 | 2025-11-23 00:12:25.885428 | TASK [Reboot manager] 2025-11-23 00:12:27.422228 | orchestrator | ok: Runtime: 0:00:00.925441 2025-11-23 00:12:27.439979 | 2025-11-23 00:12:27.440167 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-11-23 00:12:41.907246 | orchestrator | ok 2025-11-23 00:12:41.918109 | 2025-11-23 00:12:41.918252 | TASK [Wait a little longer for the manager so that everything is ready] 2025-11-23 00:13:41.968644 | orchestrator | ok 2025-11-23 00:13:41.978972 | 2025-11-23 00:13:41.979173 | TASK [Deploy manager + bootstrap nodes] 2025-11-23 00:13:45.281427 | orchestrator | 2025-11-23 00:13:45.281621 | orchestrator | # DEPLOY MANAGER 2025-11-23 00:13:45.281648 | orchestrator | 2025-11-23 00:13:45.281663 | orchestrator | + set -e 2025-11-23 00:13:45.281677 | orchestrator | + echo 2025-11-23 00:13:45.281691 | orchestrator | + echo '# DEPLOY MANAGER' 2025-11-23 00:13:45.281708 | orchestrator | + echo 2025-11-23 00:13:45.281758 | orchestrator | + cat /opt/manager-vars.sh 2025-11-23 00:13:45.284739 | orchestrator | export NUMBER_OF_NODES=6 2025-11-23 00:13:45.284766 | orchestrator | 2025-11-23 00:13:45.284778 | orchestrator | export CEPH_VERSION=reef 2025-11-23 00:13:45.284792 | orchestrator | export CONFIGURATION_VERSION=main 2025-11-23 00:13:45.284804 | orchestrator 
| export MANAGER_VERSION=latest 2025-11-23 00:13:45.284826 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-11-23 00:13:45.284837 | orchestrator | 2025-11-23 00:13:45.284855 | orchestrator | export ARA=false 2025-11-23 00:13:45.284867 | orchestrator | export DEPLOY_MODE=manager 2025-11-23 00:13:45.284916 | orchestrator | export TEMPEST=true 2025-11-23 00:13:45.284929 | orchestrator | export IS_ZUUL=true 2025-11-23 00:13:45.284940 | orchestrator | 2025-11-23 00:13:45.284958 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.118 2025-11-23 00:13:45.284970 | orchestrator | export EXTERNAL_API=false 2025-11-23 00:13:45.284982 | orchestrator | 2025-11-23 00:13:45.284993 | orchestrator | export IMAGE_USER=ubuntu 2025-11-23 00:13:45.285007 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-11-23 00:13:45.285018 | orchestrator | 2025-11-23 00:13:45.285029 | orchestrator | export CEPH_STACK=ceph-ansible 2025-11-23 00:13:45.285046 | orchestrator | 2025-11-23 00:13:45.285058 | orchestrator | + echo 2025-11-23 00:13:45.285070 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-23 00:13:45.285844 | orchestrator | ++ export INTERACTIVE=false 2025-11-23 00:13:45.285861 | orchestrator | ++ INTERACTIVE=false 2025-11-23 00:13:45.285909 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-23 00:13:45.285931 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-23 00:13:45.285956 | orchestrator | + source /opt/manager-vars.sh 2025-11-23 00:13:45.285970 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-23 00:13:45.285983 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-23 00:13:45.286065 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-23 00:13:45.286082 | orchestrator | ++ CEPH_VERSION=reef 2025-11-23 00:13:45.286095 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-23 00:13:45.286107 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-23 00:13:45.286118 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-23 00:13:45.286129 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-11-23 00:13:45.286139 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-11-23 00:13:45.286159 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-11-23 00:13:45.286171 | orchestrator | ++ export ARA=false
2025-11-23 00:13:45.286182 | orchestrator | ++ ARA=false
2025-11-23 00:13:45.286197 | orchestrator | ++ export DEPLOY_MODE=manager
2025-11-23 00:13:45.286209 | orchestrator | ++ DEPLOY_MODE=manager
2025-11-23 00:13:45.286220 | orchestrator | ++ export TEMPEST=true
2025-11-23 00:13:45.286231 | orchestrator | ++ TEMPEST=true
2025-11-23 00:13:45.286242 | orchestrator | ++ export IS_ZUUL=true
2025-11-23 00:13:45.286253 | orchestrator | ++ IS_ZUUL=true
2025-11-23 00:13:45.286264 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.118
2025-11-23 00:13:45.286275 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.118
2025-11-23 00:13:45.286287 | orchestrator | ++ export EXTERNAL_API=false
2025-11-23 00:13:45.286298 | orchestrator | ++ EXTERNAL_API=false
2025-11-23 00:13:45.286313 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-11-23 00:13:45.286324 | orchestrator | ++ IMAGE_USER=ubuntu
2025-11-23 00:13:45.286335 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-11-23 00:13:45.286346 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-11-23 00:13:45.286357 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-11-23 00:13:45.286368 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-11-23 00:13:45.286380 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-11-23 00:13:45.344833 | orchestrator | + docker version
2025-11-23 00:13:45.588796 | orchestrator | Client: Docker Engine - Community
2025-11-23 00:13:45.588957 | orchestrator | Version: 27.5.1
2025-11-23 00:13:45.588976 | orchestrator | API version: 1.47
2025-11-23 00:13:45.588990 | orchestrator | Go version: go1.22.11
2025-11-23 00:13:45.589002 | orchestrator | Git commit: 9f9e405
2025-11-23 00:13:45.589013 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-11-23 00:13:45.589026 | orchestrator | OS/Arch: linux/amd64
2025-11-23 00:13:45.589036 | orchestrator | Context: default
2025-11-23 00:13:45.589047 | orchestrator |
2025-11-23 00:13:45.589059 | orchestrator | Server: Docker Engine - Community
2025-11-23 00:13:45.589070 | orchestrator | Engine:
2025-11-23 00:13:45.589081 | orchestrator | Version: 27.5.1
2025-11-23 00:13:45.589092 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-11-23 00:13:45.589137 | orchestrator | Go version: go1.22.11
2025-11-23 00:13:45.589149 | orchestrator | Git commit: 4c9b3b0
2025-11-23 00:13:45.589160 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-11-23 00:13:45.589170 | orchestrator | OS/Arch: linux/amd64
2025-11-23 00:13:45.589181 | orchestrator | Experimental: false
2025-11-23 00:13:45.589192 | orchestrator | containerd:
2025-11-23 00:13:45.589203 | orchestrator | Version: v2.1.5
2025-11-23 00:13:45.589214 | orchestrator | GitCommit: fcd43222d6b07379a4be9786bda52438f0dd16a1
2025-11-23 00:13:45.589225 | orchestrator | runc:
2025-11-23 00:13:45.589236 | orchestrator | Version: 1.3.3
2025-11-23 00:13:45.589246 | orchestrator | GitCommit: v1.3.3-0-gd842d771
2025-11-23 00:13:45.589257 | orchestrator | docker-init:
2025-11-23 00:13:45.589268 | orchestrator | Version: 0.19.0
2025-11-23 00:13:45.589280 | orchestrator | GitCommit: de40ad0
2025-11-23 00:13:45.592386 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-11-23 00:13:45.602237 | orchestrator | + set -e
2025-11-23 00:13:45.602290 | orchestrator | + source /opt/manager-vars.sh
2025-11-23 00:13:45.602305 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-11-23 00:13:45.602319 | orchestrator | ++ NUMBER_OF_NODES=6
2025-11-23 00:13:45.602330 | orchestrator | ++ export CEPH_VERSION=reef
2025-11-23 00:13:45.602341 | orchestrator | ++ CEPH_VERSION=reef
2025-11-23 00:13:45.602352 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-11-23 00:13:45.602364 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-11-23 00:13:45.602384 | orchestrator | ++ export MANAGER_VERSION=latest
2025-11-23 00:13:45.602395 | orchestrator | ++ MANAGER_VERSION=latest
2025-11-23 00:13:45.602406 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-11-23 00:13:45.602416 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-11-23 00:13:45.602427 | orchestrator | ++ export ARA=false
2025-11-23 00:13:45.602438 | orchestrator | ++ ARA=false
2025-11-23 00:13:45.602449 | orchestrator | ++ export DEPLOY_MODE=manager
2025-11-23 00:13:45.602461 | orchestrator | ++ DEPLOY_MODE=manager
2025-11-23 00:13:45.602471 | orchestrator | ++ export TEMPEST=true
2025-11-23 00:13:45.602482 | orchestrator | ++ TEMPEST=true
2025-11-23 00:13:45.602493 | orchestrator | ++ export IS_ZUUL=true
2025-11-23 00:13:45.602504 | orchestrator | ++ IS_ZUUL=true
2025-11-23 00:13:45.602515 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.118
2025-11-23 00:13:45.602526 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.118
2025-11-23 00:13:45.602537 | orchestrator | ++ export EXTERNAL_API=false
2025-11-23 00:13:45.602548 | orchestrator | ++ EXTERNAL_API=false
2025-11-23 00:13:45.602559 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-11-23 00:13:45.602569 | orchestrator | ++ IMAGE_USER=ubuntu
2025-11-23 00:13:45.602580 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-11-23 00:13:45.602591 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-11-23 00:13:45.602602 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-11-23 00:13:45.602613 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-11-23 00:13:45.602624 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-11-23 00:13:45.602635 | orchestrator | ++ export INTERACTIVE=false
2025-11-23 00:13:45.602646 | orchestrator | ++ INTERACTIVE=false
2025-11-23 00:13:45.602656 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-11-23 00:13:45.602672 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-11-23 00:13:45.602688 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-11-23 00:13:45.602849 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-11-23 00:13:45.602865 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-11-23 00:13:45.609899 | orchestrator | + set -e
2025-11-23 00:13:45.610417 | orchestrator | + VERSION=reef
2025-11-23 00:13:45.611043 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-11-23 00:13:45.617086 | orchestrator | + [[ -n ceph_version: reef ]]
2025-11-23 00:13:45.617128 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-11-23 00:13:45.622208 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-11-23 00:13:45.628912 | orchestrator | + set -e
2025-11-23 00:13:45.629008 | orchestrator | + VERSION=2024.2
2025-11-23 00:13:45.629965 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-11-23 00:13:45.633707 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-11-23 00:13:45.633734 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-11-23 00:13:45.639306 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-11-23 00:13:45.639749 | orchestrator | ++ semver latest 7.0.0
2025-11-23 00:13:45.701386 | orchestrator | + [[ -1 -ge 0 ]]
2025-11-23 00:13:45.701488 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-11-23 00:13:45.701503 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-11-23 00:13:45.701523 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-11-23 00:13:45.790477 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-11-23 00:13:45.791563 | orchestrator | + source /opt/venv/bin/activate
2025-11-23 00:13:45.792826 | orchestrator | ++ deactivate nondestructive
2025-11-23 00:13:45.792899 | orchestrator | ++ '[' -n '' ']'
2025-11-23 00:13:45.792910 | orchestrator | ++ '[' -n '' ']'
2025-11-23 00:13:45.792917 | orchestrator | ++ hash -r
2025-11-23 00:13:45.793202 | orchestrator | ++ '[' -n '' ']'
2025-11-23 00:13:45.793277 | orchestrator | ++ unset VIRTUAL_ENV
2025-11-23 00:13:45.793300 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-11-23 00:13:45.793319 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-11-23 00:13:45.793332 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-11-23 00:13:45.793346 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-11-23 00:13:45.793357 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-11-23 00:13:45.793369 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-11-23 00:13:45.793381 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-11-23 00:13:45.793394 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-11-23 00:13:45.793414 | orchestrator | ++ export PATH
2025-11-23 00:13:45.793444 | orchestrator | ++ '[' -n '' ']'
2025-11-23 00:13:45.793463 | orchestrator | ++ '[' -z '' ']'
2025-11-23 00:13:45.793480 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-11-23 00:13:45.793497 | orchestrator | ++ PS1='(venv) '
2025-11-23 00:13:45.793515 | orchestrator | ++ export PS1
2025-11-23 00:13:45.793534 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-11-23 00:13:45.793553 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-11-23 00:13:45.793573 | orchestrator | ++ hash -r
2025-11-23 00:13:45.793617 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-11-23 00:13:46.951945 | orchestrator |
2025-11-23 00:13:46.952054 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-11-23 00:13:46.952072 | orchestrator |
2025-11-23 00:13:46.952084 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-11-23 00:13:47.452145 | orchestrator | ok: [testbed-manager]
2025-11-23 00:13:47.452247 | orchestrator |
2025-11-23 00:13:47.452263 | orchestrator | TASK [Copy fact files] *********************************************************
2025-11-23 00:13:48.306340 | orchestrator | changed: [testbed-manager]
2025-11-23 00:13:48.306451 | orchestrator |
2025-11-23 00:13:48.306468 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-11-23 00:13:48.306482 | orchestrator |
2025-11-23 00:13:48.306493 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-23 00:13:50.444365 | orchestrator | ok: [testbed-manager]
2025-11-23 00:13:50.444496 | orchestrator |
2025-11-23 00:13:50.444517 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-11-23 00:13:50.490509 | orchestrator | ok: [testbed-manager]
2025-11-23 00:13:50.490632 | orchestrator |
2025-11-23 00:13:50.490652 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-11-23 00:13:50.917027 | orchestrator | changed: [testbed-manager]
2025-11-23 00:13:50.917139 | orchestrator |
2025-11-23 00:13:50.917164 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-11-23 00:13:50.957065 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:13:50.957134 | orchestrator |
2025-11-23 00:13:50.957148 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-11-23 00:13:51.259402 | orchestrator | changed: [testbed-manager]
2025-11-23 00:13:51.259514 | orchestrator |
2025-11-23 00:13:51.259536 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-11-23 00:13:51.308157 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:13:51.308260 | orchestrator |
2025-11-23 00:13:51.308277 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-11-23 00:13:51.616501 | orchestrator | ok: [testbed-manager]
2025-11-23 00:13:51.616623 | orchestrator |
2025-11-23 00:13:51.616639 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-11-23 00:13:51.735993 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:13:51.736103 | orchestrator |
2025-11-23 00:13:51.736117 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-11-23 00:13:51.736128 | orchestrator |
2025-11-23 00:13:51.736138 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-23 00:13:53.336397 | orchestrator | ok: [testbed-manager]
2025-11-23 00:13:53.336519 | orchestrator |
2025-11-23 00:13:53.336537 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-11-23 00:13:53.416993 | orchestrator | included: osism.services.traefik for testbed-manager
2025-11-23 00:13:53.417087 | orchestrator |
2025-11-23 00:13:53.417102 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-11-23 00:13:53.463539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-11-23 00:13:53.463633 | orchestrator |
2025-11-23 00:13:53.463648 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-11-23 00:13:54.479774 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-11-23 00:13:54.479921 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-11-23 00:13:54.479938 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-11-23 00:13:54.479949 | orchestrator |
2025-11-23 00:13:54.479961 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-11-23 00:13:56.132408 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-11-23 00:13:56.132489 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-11-23 00:13:56.132501 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-11-23 00:13:56.132507 | orchestrator |
2025-11-23 00:13:56.132514 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-11-23 00:13:56.720523 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-23 00:13:56.720622 | orchestrator | changed: [testbed-manager]
2025-11-23 00:13:56.720639 | orchestrator |
2025-11-23 00:13:56.720650 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-11-23 00:13:57.310560 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-23 00:13:57.310670 | orchestrator | changed: [testbed-manager]
2025-11-23 00:13:57.310693 | orchestrator |
2025-11-23 00:13:57.310709 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-11-23 00:13:57.365021 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:13:57.365120 | orchestrator |
2025-11-23 00:13:57.365138 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-11-23 00:13:57.695004 | orchestrator | ok: [testbed-manager]
2025-11-23 00:13:57.695105 | orchestrator |
2025-11-23 00:13:57.695122 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-11-23 00:13:57.755677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-11-23 00:13:57.755790 | orchestrator |
2025-11-23 00:13:57.755815 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-11-23 00:13:58.735440 | orchestrator | changed: [testbed-manager]
2025-11-23 00:13:58.735519 | orchestrator |
2025-11-23 00:13:58.735529 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-11-23 00:13:59.484041 | orchestrator | changed: [testbed-manager]
2025-11-23 00:13:59.484156 | orchestrator |
2025-11-23 00:13:59.484178 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-11-23 00:14:14.783275 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:14.783349 | orchestrator |
2025-11-23 00:14:14.783357 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-11-23 00:14:14.857174 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:14:14.857272 | orchestrator |
2025-11-23 00:14:14.857293 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-11-23 00:14:14.857307 | orchestrator |
2025-11-23 00:14:14.857315 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-23 00:14:16.580033 | orchestrator | ok: [testbed-manager]
2025-11-23 00:14:16.580122 | orchestrator |
2025-11-23 00:14:16.580165 | orchestrator | TASK [Apply manager role] ******************************************************
2025-11-23 00:14:16.702247 | orchestrator | included: osism.services.manager for testbed-manager
2025-11-23 00:14:16.702334 | orchestrator |
2025-11-23 00:14:16.702346 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-11-23 00:14:16.756061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-11-23 00:14:16.756141 | orchestrator |
2025-11-23 00:14:16.756152 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-11-23 00:14:18.899557 | orchestrator | ok: [testbed-manager]
2025-11-23 00:14:18.899663 | orchestrator |
2025-11-23 00:14:18.899681 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-11-23 00:14:18.950697 | orchestrator | ok: [testbed-manager]
2025-11-23 00:14:18.950803 | orchestrator |
2025-11-23 00:14:18.950821 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-11-23 00:14:19.071017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-11-23 00:14:19.071115 | orchestrator |
2025-11-23 00:14:19.071130 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-11-23 00:14:21.748319 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-11-23 00:14:21.748452 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-11-23 00:14:21.748469 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-11-23 00:14:21.748481 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-11-23 00:14:21.748492 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-11-23 00:14:21.748504 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-11-23 00:14:21.748515 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-11-23 00:14:21.748526 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-11-23 00:14:21.748538 | orchestrator |
2025-11-23 00:14:21.748550 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-11-23 00:14:23.076595 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:23.076697 | orchestrator |
2025-11-23 00:14:23.076715 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-11-23 00:14:23.652616 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:23.652718 | orchestrator |
2025-11-23 00:14:23.652736 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-11-23 00:14:23.729296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-11-23 00:14:23.729394 | orchestrator |
2025-11-23 00:14:23.729410 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-11-23 00:14:24.810311 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-11-23 00:14:24.810411 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-11-23 00:14:24.810428 | orchestrator |
2025-11-23 00:14:24.810442 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-11-23 00:14:25.369059 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:25.369163 | orchestrator |
2025-11-23 00:14:25.369181 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-11-23 00:14:25.415820 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:14:25.415959 | orchestrator |
2025-11-23 00:14:25.415975 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-11-23 00:14:25.484316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-11-23 00:14:25.484412 | orchestrator |
2025-11-23 00:14:25.484428 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-11-23 00:14:26.043567 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:26.043690 | orchestrator |
2025-11-23 00:14:26.043718 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-11-23 00:14:26.109289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-11-23 00:14:26.109417 | orchestrator |
2025-11-23 00:14:26.109435 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-11-23 00:14:27.370195 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-23 00:14:27.370298 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-23 00:14:27.370314 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:27.370328 | orchestrator |
2025-11-23 00:14:27.370340 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-11-23 00:14:27.932575 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:27.932685 | orchestrator |
2025-11-23 00:14:27.932708 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-11-23 00:14:27.973387 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:14:27.973479 | orchestrator |
2025-11-23 00:14:27.973494 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-11-23 00:14:28.059577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-11-23 00:14:28.059688 | orchestrator |
2025-11-23 00:14:28.059711 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-11-23 00:14:28.529959 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:28.530084 | orchestrator |
2025-11-23 00:14:28.530102 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-11-23 00:14:28.913473 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:28.913559 | orchestrator |
2025-11-23 00:14:28.913571 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-11-23 00:14:30.023667 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-11-23 00:14:30.023771 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-11-23 00:14:30.023786 | orchestrator |
2025-11-23 00:14:30.023815 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-11-23 00:14:30.589486 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:30.589602 | orchestrator |
2025-11-23 00:14:30.589631 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-11-23 00:14:30.952087 | orchestrator | ok: [testbed-manager]
2025-11-23 00:14:30.952190 | orchestrator |
2025-11-23 00:14:30.952206 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-11-23 00:14:31.290072 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:31.290172 | orchestrator |
2025-11-23 00:14:31.290186 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-11-23 00:14:31.338392 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:14:31.338480 | orchestrator |
2025-11-23 00:14:31.338495 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-11-23 00:14:31.408164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-11-23 00:14:31.408266 | orchestrator |
2025-11-23 00:14:31.408277 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-11-23 00:14:31.457709 | orchestrator | ok: [testbed-manager]
2025-11-23 00:14:31.457799 | orchestrator |
2025-11-23 00:14:31.457812 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-11-23 00:14:33.294821 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-11-23 00:14:33.294994 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-11-23 00:14:33.295013 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-11-23 00:14:33.295026 | orchestrator |
2025-11-23 00:14:33.295039 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-11-23 00:14:33.879352 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:33.879439 | orchestrator |
2025-11-23 00:14:33.879455 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-11-23 00:14:34.510774 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:34.510915 | orchestrator |
2025-11-23 00:14:34.510930 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-11-23 00:14:35.152955 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:35.153054 | orchestrator |
2025-11-23 00:14:35.153070 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-11-23 00:14:35.217183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-11-23 00:14:35.217310 | orchestrator |
2025-11-23 00:14:35.217323 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-11-23 00:14:35.256623 | orchestrator | ok: [testbed-manager]
2025-11-23 00:14:35.256728 | orchestrator |
2025-11-23 00:14:35.256746 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-11-23 00:14:35.902399 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-11-23 00:14:35.902491 | orchestrator |
2025-11-23 00:14:35.902505 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-11-23 00:14:35.985070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-11-23 00:14:35.985192 | orchestrator |
2025-11-23 00:14:35.985210 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-11-23 00:14:36.635237 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:36.635310 | orchestrator |
2025-11-23 00:14:36.635318 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-11-23 00:14:37.175267 | orchestrator | ok: [testbed-manager]
2025-11-23 00:14:37.175398 | orchestrator |
2025-11-23 00:14:37.175425 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-11-23 00:14:37.231659 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:14:37.231748 | orchestrator |
2025-11-23 00:14:37.231761 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-11-23 00:14:37.287523 | orchestrator | ok: [testbed-manager]
2025-11-23 00:14:37.287622 | orchestrator |
2025-11-23 00:14:37.287638 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-11-23 00:14:38.023546 | orchestrator | changed: [testbed-manager]
2025-11-23 00:14:38.023662 | orchestrator |
2025-11-23 00:14:38.023675 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-11-23 00:15:48.767262 | orchestrator | changed: [testbed-manager]
2025-11-23 00:15:48.767359 | orchestrator |
2025-11-23 00:15:48.767371 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-11-23 00:15:49.623616 | orchestrator | ok: [testbed-manager]
2025-11-23 00:15:49.623719 | orchestrator |
2025-11-23 00:15:49.623738 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-11-23 00:15:49.734210 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:15:49.734285 | orchestrator |
2025-11-23 00:15:49.734295 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-11-23 00:15:51.904436 | orchestrator | changed: [testbed-manager]
2025-11-23 00:15:51.904540 | orchestrator |
2025-11-23 00:15:51.904558 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-11-23 00:15:51.952246 | orchestrator | ok: [testbed-manager]
2025-11-23 00:15:51.952348 | orchestrator |
2025-11-23 00:15:51.952365 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-11-23 00:15:51.952378 | orchestrator |
2025-11-23 00:15:51.952389 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-11-23 00:15:51.993148 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:15:51.993236 | orchestrator |
2025-11-23 00:15:51.993251 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-11-23 00:16:52.053645 | orchestrator | Pausing for 60 seconds
2025-11-23 00:16:52.053848 | orchestrator | changed: [testbed-manager]
2025-11-23 00:16:52.053859 | orchestrator |
2025-11-23 00:16:52.053867 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-11-23 00:16:55.037479 | orchestrator | changed: [testbed-manager]
2025-11-23 00:16:55.037560 | orchestrator |
2025-11-23 00:16:55.037570 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-11-23 00:17:36.488782 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-11-23 00:17:36.488905 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-11-23 00:17:36.488924 | orchestrator | changed: [testbed-manager]
2025-11-23 00:17:36.488966 | orchestrator |
2025-11-23 00:17:36.488980 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-11-23 00:17:45.531844 | orchestrator | changed: [testbed-manager]
2025-11-23 00:17:45.531980 | orchestrator |
2025-11-23 00:17:45.532009 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-11-23 00:17:45.606367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-11-23 00:17:45.606483 | orchestrator |
2025-11-23 00:17:45.606510 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-11-23 00:17:45.606531 | orchestrator |
2025-11-23 00:17:45.606551 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-11-23 00:17:45.661250 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:17:45.661361 | orchestrator |
2025-11-23 00:17:45.661386 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2025-11-23 00:17:45.726324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2025-11-23 00:17:45.726442 | orchestrator |
2025-11-23 00:17:45.726468 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2025-11-23 00:17:46.424533 | orchestrator | changed: [testbed-manager]
2025-11-23 00:17:46.424638 | orchestrator |
2025-11-23 00:17:46.424714 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2025-11-23 00:17:49.458412 | orchestrator | ok: [testbed-manager]
2025-11-23 00:17:49.458513 | orchestrator |
2025-11-23 00:17:49.458529 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2025-11-23 00:17:49.532283 | orchestrator | ok: [testbed-manager] => {
2025-11-23 00:17:49.532391 | orchestrator | "version_check_result.stdout_lines": [
2025-11-23 00:17:49.532406 | orchestrator | "=== OSISM Container Version Check ===",
2025-11-23 00:17:49.532417 | orchestrator | "Checking running containers against expected versions...",
2025-11-23 00:17:49.532429 | orchestrator | "",
2025-11-23 00:17:49.532439 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2025-11-23 00:17:49.532449 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2025-11-23 00:17:49.532460 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.532470 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2025-11-23 00:17:49.532480 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.532490 | orchestrator | "",
2025-11-23 00:17:49.532500 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2025-11-23 00:17:49.532510 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2025-11-23 00:17:49.532519 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.532529 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2025-11-23 00:17:49.532539 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.532549 | orchestrator | "",
2025-11-23 00:17:49.532558 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2025-11-23 00:17:49.532568 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2025-11-23 00:17:49.532578 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.532587 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2025-11-23 00:17:49.532597 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.532607 | orchestrator | "",
2025-11-23 00:17:49.532616 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2025-11-23 00:17:49.532626 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2025-11-23 00:17:49.532637 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.532695 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2025-11-23 00:17:49.532713 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.532730 | orchestrator | "",
2025-11-23 00:17:49.532745 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2025-11-23 00:17:49.532761 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-11-23 00:17:49.532802 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.532819 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-11-23 00:17:49.532834 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.532851 | orchestrator | "",
2025-11-23 00:17:49.532867 | orchestrator | "Checking service: osismclient (OSISM Client)",
2025-11-23 00:17:49.532884 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.532902 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.532919 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.532936 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.532953 | orchestrator | "",
2025-11-23 00:17:49.532971 | orchestrator | "Checking service: ara-server (ARA Server)",
2025-11-23 00:17:49.532987 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2025-11-23 00:17:49.533004 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533021 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2025-11-23 00:17:49.533036 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533053 | orchestrator | "",
2025-11-23 00:17:49.533071 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2025-11-23 00:17:49.533097 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-11-23 00:17:49.533112 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533124 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-11-23 00:17:49.533136 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533146 | orchestrator | "",
2025-11-23 00:17:49.533157 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2025-11-23 00:17:49.533168 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2025-11-23 00:17:49.533179 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533215 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2025-11-23 00:17:49.533225 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533235 | orchestrator | "",
2025-11-23 00:17:49.533244 | orchestrator | "Checking service: redis (Redis Cache)",
2025-11-23 00:17:49.533254 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-11-23 00:17:49.533264 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533273 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-11-23 00:17:49.533283 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533292 | orchestrator | "",
2025-11-23 00:17:49.533302 | orchestrator | "Checking service: api (OSISM API Service)",
2025-11-23 00:17:49.533311 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533321 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533330 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533339 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533349 | orchestrator | "",
2025-11-23 00:17:49.533358 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2025-11-23 00:17:49.533368 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533378 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533389 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533399 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533410 | orchestrator | "",
2025-11-23 00:17:49.533420 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2025-11-23 00:17:49.533431 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533441 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533452 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533462 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533473 | orchestrator | "",
2025-11-23 00:17:49.533484 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2025-11-23 00:17:49.533494 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533505 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533515 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533537 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533548 | orchestrator | "",
2025-11-23 00:17:49.533559 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2025-11-23 00:17:49.533589 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533600 | orchestrator | " Enabled: true",
2025-11-23 00:17:49.533611 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-23 00:17:49.533621 | orchestrator | " Status: ✅ MATCH",
2025-11-23 00:17:49.533632 | orchestrator | "",
2025-11-23 00:17:49.533643 | orchestrator | "=== Summary ===",
2025-11-23
00:17:49.533675 | orchestrator | "Errors (version mismatches): 0", 2025-11-23 00:17:49.533686 | orchestrator | "Warnings (expected containers not running): 0", 2025-11-23 00:17:49.533697 | orchestrator | "", 2025-11-23 00:17:49.533708 | orchestrator | "✅ All running containers match expected versions!" 2025-11-23 00:17:49.533719 | orchestrator | ] 2025-11-23 00:17:49.533730 | orchestrator | } 2025-11-23 00:17:49.533741 | orchestrator | 2025-11-23 00:17:49.533752 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-11-23 00:17:49.589854 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:17:49.589950 | orchestrator | 2025-11-23 00:17:49.589967 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:17:49.589982 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-11-23 00:17:49.589994 | orchestrator | 2025-11-23 00:17:49.656892 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-11-23 00:17:49.656983 | orchestrator | + deactivate 2025-11-23 00:17:49.656997 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-11-23 00:17:49.657010 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-23 00:17:49.657021 | orchestrator | + export PATH 2025-11-23 00:17:49.657032 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-11-23 00:17:49.657044 | orchestrator | + '[' -n '' ']' 2025-11-23 00:17:49.657055 | orchestrator | + hash -r 2025-11-23 00:17:49.657066 | orchestrator | + '[' -n '' ']' 2025-11-23 00:17:49.657077 | orchestrator | + unset VIRTUAL_ENV 2025-11-23 00:17:49.657087 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-11-23 00:17:49.657098 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-11-23 00:17:49.657109 | orchestrator | + unset -f deactivate 2025-11-23 00:17:49.657120 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-11-23 00:17:49.662769 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-23 00:17:49.662836 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-11-23 00:17:49.662849 | orchestrator | + local max_attempts=60 2025-11-23 00:17:49.662861 | orchestrator | + local name=ceph-ansible 2025-11-23 00:17:49.662873 | orchestrator | + local attempt_num=1 2025-11-23 00:17:49.663964 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:17:49.702076 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:17:49.702164 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-11-23 00:17:49.702178 | orchestrator | + local max_attempts=60 2025-11-23 00:17:49.702191 | orchestrator | + local name=kolla-ansible 2025-11-23 00:17:49.702202 | orchestrator | + local attempt_num=1 2025-11-23 00:17:49.702213 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-11-23 00:17:49.726548 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:17:49.726622 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-11-23 00:17:49.726631 | orchestrator | + local max_attempts=60 2025-11-23 00:17:49.726641 | orchestrator | + local name=osism-ansible 2025-11-23 00:17:49.726685 | orchestrator | + local attempt_num=1 2025-11-23 00:17:49.727335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-11-23 00:17:49.767500 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:17:49.767596 | orchestrator | + [[ true == \t\r\u\e ]] 2025-11-23 00:17:49.767612 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-23 00:17:50.432167 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2025-11-23 00:17:50.607924 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-11-23 00:17:50.608091 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-11-23 00:17:50.608121 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-11-23 00:17:50.608142 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-11-23 00:17:50.608163 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-11-23 00:17:50.608181 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-11-23 00:17:50.608221 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-11-23 00:17:50.608243 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 55 seconds (healthy) 2025-11-23 00:17:50.608261 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-11-23 00:17:50.608281 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-11-23 00:17:50.608294 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2025-11-23 00:17:50.608309 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-11-23 00:17:50.608328 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-11-23 00:17:50.608365 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-11-23 00:17:50.608383 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-11-23 00:17:50.608394 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-11-23 00:17:50.614756 | orchestrator | ++ semver latest 7.0.0 2025-11-23 00:17:50.668572 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-23 00:17:50.668679 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-23 00:17:50.668691 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-11-23 00:17:50.672857 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-11-23 00:18:02.647107 | orchestrator | 2025-11-23 00:18:02 | INFO  | Task 22f0501d-6f1a-41e1-91c6-25ab781aa271 (resolvconf) was prepared for execution. 2025-11-23 00:18:02.647251 | orchestrator | 2025-11-23 00:18:02 | INFO  | It takes a moment until task 22f0501d-6f1a-41e1-91c6-25ab781aa271 (resolvconf) has been started and output is visible here. 
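The `wait_for_container_healthy` helper traced above (via `set -x`) can be reconstructed roughly as follows. This is a sketch based only on the trace: the trace shows the variable names and the `docker inspect` health probe, but each container was already healthy on the first check, so the retry loop and the poll interval are assumptions.

```shell
# Sketch of the wait_for_container_healthy helper seen in the set -x trace.
# The retry loop and 5-second sleep are assumptions; the trace only shows the
# first (already healthy) docker inspect call per container.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the job it is invoked as `wait_for_container_healthy 60 ceph-ansible` (and likewise for `kolla-ansible` and `osism-ansible`) before `docker compose ps` is run.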
2025-11-23 00:18:15.753357 | orchestrator | 2025-11-23 00:18:15.753472 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-11-23 00:18:15.753489 | orchestrator | 2025-11-23 00:18:15.753502 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-23 00:18:15.753513 | orchestrator | Sunday 23 November 2025 00:18:06 +0000 (0:00:00.102) 0:00:00.102 ******* 2025-11-23 00:18:15.753525 | orchestrator | ok: [testbed-manager] 2025-11-23 00:18:15.753537 | orchestrator | 2025-11-23 00:18:15.753548 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-11-23 00:18:15.753560 | orchestrator | Sunday 23 November 2025 00:18:09 +0000 (0:00:03.401) 0:00:03.503 ******* 2025-11-23 00:18:15.753571 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:18:15.753582 | orchestrator | 2025-11-23 00:18:15.753593 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-11-23 00:18:15.753604 | orchestrator | Sunday 23 November 2025 00:18:09 +0000 (0:00:00.068) 0:00:03.571 ******* 2025-11-23 00:18:15.753627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-11-23 00:18:15.753702 | orchestrator | 2025-11-23 00:18:15.753713 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-11-23 00:18:15.753724 | orchestrator | Sunday 23 November 2025 00:18:09 +0000 (0:00:00.078) 0:00:03.650 ******* 2025-11-23 00:18:15.753735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-11-23 00:18:15.753746 | orchestrator | 2025-11-23 00:18:15.753757 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-11-23 00:18:15.753768 | orchestrator | Sunday 23 November 2025 00:18:09 +0000 (0:00:00.063) 0:00:03.714 ******* 2025-11-23 00:18:15.753779 | orchestrator | ok: [testbed-manager] 2025-11-23 00:18:15.753790 | orchestrator | 2025-11-23 00:18:15.753801 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-11-23 00:18:15.753812 | orchestrator | Sunday 23 November 2025 00:18:10 +0000 (0:00:00.869) 0:00:04.583 ******* 2025-11-23 00:18:15.753823 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:18:15.753834 | orchestrator | 2025-11-23 00:18:15.753845 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-11-23 00:18:15.753857 | orchestrator | Sunday 23 November 2025 00:18:10 +0000 (0:00:00.068) 0:00:04.652 ******* 2025-11-23 00:18:15.753868 | orchestrator | ok: [testbed-manager] 2025-11-23 00:18:15.753878 | orchestrator | 2025-11-23 00:18:15.753891 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-11-23 00:18:15.753904 | orchestrator | Sunday 23 November 2025 00:18:11 +0000 (0:00:00.445) 0:00:05.098 ******* 2025-11-23 00:18:15.753916 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:18:15.753929 | orchestrator | 2025-11-23 00:18:15.753941 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-11-23 00:18:15.753954 | orchestrator | Sunday 23 November 2025 00:18:11 +0000 (0:00:00.072) 0:00:05.171 ******* 2025-11-23 00:18:15.753967 | orchestrator | changed: [testbed-manager] 2025-11-23 00:18:15.753979 | orchestrator | 2025-11-23 00:18:15.753991 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-11-23 00:18:15.754004 | orchestrator | Sunday 23 November 2025 00:18:11 +0000 (0:00:00.467) 0:00:05.638 ******* 2025-11-23 00:18:15.754071 | orchestrator | changed: 
[testbed-manager] 2025-11-23 00:18:15.754084 | orchestrator | 2025-11-23 00:18:15.754097 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-11-23 00:18:15.754110 | orchestrator | Sunday 23 November 2025 00:18:12 +0000 (0:00:00.998) 0:00:06.637 ******* 2025-11-23 00:18:15.754122 | orchestrator | ok: [testbed-manager] 2025-11-23 00:18:15.754134 | orchestrator | 2025-11-23 00:18:15.754146 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-11-23 00:18:15.754178 | orchestrator | Sunday 23 November 2025 00:18:14 +0000 (0:00:01.866) 0:00:08.503 ******* 2025-11-23 00:18:15.754191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-11-23 00:18:15.754204 | orchestrator | 2025-11-23 00:18:15.754216 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-11-23 00:18:15.754228 | orchestrator | Sunday 23 November 2025 00:18:14 +0000 (0:00:00.066) 0:00:08.569 ******* 2025-11-23 00:18:15.754240 | orchestrator | changed: [testbed-manager] 2025-11-23 00:18:15.754252 | orchestrator | 2025-11-23 00:18:15.754263 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:18:15.754275 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-23 00:18:15.754286 | orchestrator | 2025-11-23 00:18:15.754298 | orchestrator | 2025-11-23 00:18:15.754308 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:18:15.754319 | orchestrator | Sunday 23 November 2025 00:18:15 +0000 (0:00:01.034) 0:00:09.604 ******* 2025-11-23 00:18:15.754330 | orchestrator | =============================================================================== 2025-11-23 00:18:15.754341 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.40s 2025-11-23 00:18:15.754352 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.87s 2025-11-23 00:18:15.754362 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.03s 2025-11-23 00:18:15.754373 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.00s 2025-11-23 00:18:15.754384 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.87s 2025-11-23 00:18:15.754394 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.47s 2025-11-23 00:18:15.754423 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.45s 2025-11-23 00:18:15.754434 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-11-23 00:18:15.754451 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-11-23 00:18:15.754463 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-11-23 00:18:15.754474 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-11-23 00:18:15.754485 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-11-23 00:18:15.754496 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2025-11-23 00:18:15.941912 | orchestrator | + osism apply sshconfig 2025-11-23 00:18:27.728788 | orchestrator | 2025-11-23 00:18:27 | INFO  | Task 6b1c8bbd-7a93-4304-846d-9703fe9af702 (sshconfig) was prepared for execution. 
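Judging from the task names in the resolvconf play above ("Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf", "Start/enable systemd-resolved service"), the role's effect on a Debian-family host amounts to the following. This is an inference from the task output, not the role's actual source; `ROOT` is an illustration variable so the sketch stays inside a temp directory (the real role operates on `/` as root and also enables `systemd-resolved`).

```shell
# Hedged reconstruction of the resolvconf role's core step, sandboxed under a
# temp ROOT so it does not touch the real filesystem.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/run/systemd/resolve" "$ROOT/etc"
: > "$ROOT/run/systemd/resolve/stub-resolv.conf"   # placeholder for the stub file
# The key operation: make /etc/resolv.conf a symlink to the resolved stub.
ln -sf "$ROOT/run/systemd/resolve/stub-resolv.conf" "$ROOT/etc/resolv.conf"
readlink "$ROOT/etc/resolv.conf"
```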
2025-11-23 00:18:27.728902 | orchestrator | 2025-11-23 00:18:27 | INFO  | It takes a moment until task 6b1c8bbd-7a93-4304-846d-9703fe9af702 (sshconfig) has been started and output is visible here. 2025-11-23 00:18:37.956360 | orchestrator | 2025-11-23 00:18:37.956518 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-11-23 00:18:37.956537 | orchestrator | 2025-11-23 00:18:37.956549 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-11-23 00:18:37.956561 | orchestrator | Sunday 23 November 2025 00:18:31 +0000 (0:00:00.118) 0:00:00.118 ******* 2025-11-23 00:18:37.956572 | orchestrator | ok: [testbed-manager] 2025-11-23 00:18:37.956584 | orchestrator | 2025-11-23 00:18:37.956596 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-11-23 00:18:37.956607 | orchestrator | Sunday 23 November 2025 00:18:31 +0000 (0:00:00.484) 0:00:00.602 ******* 2025-11-23 00:18:37.956672 | orchestrator | changed: [testbed-manager] 2025-11-23 00:18:37.956684 | orchestrator | 2025-11-23 00:18:37.956695 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-11-23 00:18:37.956739 | orchestrator | Sunday 23 November 2025 00:18:32 +0000 (0:00:00.456) 0:00:01.059 ******* 2025-11-23 00:18:37.956751 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-11-23 00:18:37.956762 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-11-23 00:18:37.956773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-11-23 00:18:37.956784 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-11-23 00:18:37.956794 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-11-23 00:18:37.956820 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-11-23 00:18:37.956840 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-2) 2025-11-23 00:18:37.956852 | orchestrator | 2025-11-23 00:18:37.956863 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-11-23 00:18:37.956876 | orchestrator | Sunday 23 November 2025 00:18:37 +0000 (0:00:05.051) 0:00:06.110 ******* 2025-11-23 00:18:37.956888 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:18:37.956900 | orchestrator | 2025-11-23 00:18:37.956912 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-11-23 00:18:37.956924 | orchestrator | Sunday 23 November 2025 00:18:37 +0000 (0:00:00.070) 0:00:06.180 ******* 2025-11-23 00:18:37.956936 | orchestrator | changed: [testbed-manager] 2025-11-23 00:18:37.956948 | orchestrator | 2025-11-23 00:18:37.956960 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:18:37.956974 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:18:37.956987 | orchestrator | 2025-11-23 00:18:37.957000 | orchestrator | 2025-11-23 00:18:37.957012 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:18:37.957025 | orchestrator | Sunday 23 November 2025 00:18:37 +0000 (0:00:00.505) 0:00:06.685 ******* 2025-11-23 00:18:37.957037 | orchestrator | =============================================================================== 2025-11-23 00:18:37.957050 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.05s 2025-11-23 00:18:37.957063 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s 2025-11-23 00:18:37.957075 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.48s 2025-11-23 00:18:37.957088 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.46s 2025-11-23 00:18:37.957100 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-11-23 00:18:38.134825 | orchestrator | + osism apply known-hosts 2025-11-23 00:18:49.965307 | orchestrator | 2025-11-23 00:18:49 | INFO  | Task c46b6548-4450-4360-b1f0-2337e01301f0 (known-hosts) was prepared for execution. 2025-11-23 00:18:49.965453 | orchestrator | 2025-11-23 00:18:49 | INFO  | It takes a moment until task c46b6548-4450-4360-b1f0-2337e01301f0 (known-hosts) has been started and output is visible here. 2025-11-23 00:19:05.182308 | orchestrator | 2025-11-23 00:19:05.182422 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-11-23 00:19:05.182439 | orchestrator | 2025-11-23 00:19:05.182451 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-11-23 00:19:05.182462 | orchestrator | Sunday 23 November 2025 00:18:53 +0000 (0:00:00.143) 0:00:00.143 ******* 2025-11-23 00:19:05.182474 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-11-23 00:19:05.182486 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-11-23 00:19:05.182497 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-11-23 00:19:05.182507 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-11-23 00:19:05.182528 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-11-23 00:19:05.182539 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-11-23 00:19:05.182550 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-11-23 00:19:05.182580 | orchestrator | 2025-11-23 00:19:05.182592 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-11-23 00:19:05.182687 | orchestrator | Sunday 23 November 2025 00:18:59 +0000 (0:00:05.657) 0:00:05.801 ******* 2025-11-23 
00:19:05.182700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-11-23 00:19:05.182713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-11-23 00:19:05.182724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-11-23 00:19:05.182734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-11-23 00:19:05.182745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-23 00:19:05.182757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-23 00:19:05.182768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-23 00:19:05.182779 | orchestrator | 2025-11-23 00:19:05.182789 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:05.182800 | orchestrator | Sunday 23 November 2025 00:18:59 +0000 (0:00:00.145) 0:00:05.946 ******* 2025-11-23 00:19:05.182815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCq9ufIQRxb8pMoc6WAH9zkoCgGUju1CU3VfMv9agNHQchAkWdsoS29br0vKjpCurtHJx4TP46C0YMWBdNuXmudaDGor041IcfnVBzPT8jo1v8bxh+egIcSp8ow1a2QgpSe6QTwoEMHI7+vNyzzw8pmoLqOnSb7GGpgrJs1w6wnWyTuc3+lSCsV4cNkUaKw5j1S9b7tbdimNZ66An5WzXH/3lS6fTzZ8fhT2WsEuB+o8Te378RrgHFqyvt40A3NcqzgK8VHrofQYqK0+ehp6qd6106QqSh14dJb/xpGH1cOhNSAeOb3LbMXnNZ0XYh6eTcNLfy6cpa2zQ39oW3XYRAJZ1O7wHkcxcSS/zZOQ4HP27qoRvJhqBSfPzcHxvBJ8VnI6pthjls+A+xDmJJkhHXxCp8z37yvRA+/bx5xw9fiNRsL0sAcEa5uiavt7J2ERw9yZ04Q/tjX1Upzzm2fOsLhPUmcjGhfjLEshQmwtL0J5wZcta9x9ndxbFe8f/h5bQU=) 2025-11-23 00:19:05.182830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL6ueXwPhrPvG9ywSfEHh+RHSlJKDCK//iH4ueRlesAcD688JdNFYKIoPz0ryOLriJdiPJ1lKZ9GOCLxdy+uReo=) 2025-11-23 00:19:05.182843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJmmNI15rqAR6mCfchICMCD6c+8GaYF8Vptk12gSDgdw) 2025-11-23 00:19:05.182855 | orchestrator | 2025-11-23 00:19:05.182865 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:05.182876 | orchestrator | Sunday 23 November 2025 00:19:00 +0000 (0:00:01.070) 0:00:07.016 ******* 2025-11-23 00:19:05.182887 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGwuRMKaqb2vIaPNkMULx3np3NbfpeGQt7pUC/HUBNENLT3Z4ufN3uPV5jcFlH9WWpyAsnMuc3d6Qfy3MavJIVk=) 2025-11-23 00:19:05.182926 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDO7J8InKPNx6x/mgNyeGbNWuiA4EuakpwYHp1FzcUofnEGnnnreE8Y+NxbmkwteNJtA5wUDmoqAQ9qnvKB77Wtgjy1Nc9bDZiklXoTBaZaTVn2hwSQUt6U6D/wtVvniyT/ezSNMwNDJrMdwDPydpM0nVYfNlxQnCL5kCT1rbwfp/hXT351AvqQTEGQjPB+4tfq+M7eA34+xhXPZCuS6p8KqAyP7bRMVXis08NZWSI8nmobR3RJQjpdFq/WgggE2s7vDkQ8bJp9sosbNT+l8xEpC5f25IbkktP4uQSVxw9+CMGJJ7RooqrvVWHpClKOWZQilmpRiGLzlTNwrRyCQW3a/OrehvkO3i2XmWjoxbt1nrSLwdlX9QUfJdgS+mTeqECZRrEYQcJhAIY8OcXhDbWObTD6GsnJ6cJgodA+xEQfPUrmeyMcZ7R+XvgjjmfLdfRbd5VLU4qjTo8e8p+dQlGr4PmxD5TAo5tTtGj4WlrKaiQKwOpHri810rXDpOwj/s=) 2025-11-23 00:19:05.182947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEVBQU/xRzaDfl6cctO+k2vMmNMwdgwujD4tMAy5ooPE) 2025-11-23 00:19:05.182958 | orchestrator | 2025-11-23 00:19:05.182969 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:05.182980 | orchestrator | Sunday 23 November 2025 00:19:01 +0000 (0:00:00.885) 0:00:07.902 ******* 2025-11-23 00:19:05.182991 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMcKc5MmCkmJ0JbcrCiAVwy0yyHpbCjms2F6hZCJ8nw1) 2025-11-23 00:19:05.183065 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/+2vVqncFlqX/DbJsSDgiDt94n4MrzDRXl/O748OL6240Hg0X5LQpz4ArwJH8bfPrq4jrUbkv67VSTvwhg4iWimc0Rfz3Ningd+E9T+G7KqNwAGvAEkkja/1Xn1whm4CmX0LHBbDpIG8HrI7VP+Me9kFFwmBtTouCpdNOByWLRZVd7ciSbMnPOagx3lantwVM4qss1bbr8noWRk3ee2yPPfJLwsG9Agb0L8qV4bpQsUE8tDs41nQ6Ju5s8SINF+SypZHjZCPLtCzjvJzCj0zNVUYOmkEH4cb3bZB8CCKXvDoyKCE9cAyEYk52yoIWDSY+jKJdxHPgVjWHS6MfIOFD53lqcUhQA12LSCwny7YxKab99snsyT1olJcHt/bKgfO8lf7awt4WGTJhvmd+gMLU/86ZkvAXxT+Os87F7yZw6+XtLsSiD/Rq1nID7KSKUbnS63no5qh9Y3ceE2RGvI+IEsLkZXGOV7V4xvSKzUHUYMLubP6S1NTN7VVEB8AXs8s=) 2025-11-23 00:19:05.183077 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL/O6/iqxzIrvvZKyEaThCHLKZc5Wwn0idXDQ/qTTgBFjSgZvLkR9VbclnuGdEzj1HRwAESW0Q+AZqEm5T0hNDU=) 2025-11-23 00:19:05.183088 | orchestrator | 2025-11-23 00:19:05.183099 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:05.183110 | orchestrator | Sunday 23 November 2025 00:19:02 +0000 (0:00:00.944) 0:00:08.846 ******* 2025-11-23 00:19:05.183121 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFbsxQWFDCv5Exof4GKWB+uyG0EcsRElDoYBTeja7sEp4vJSo5fdUtCevwh7h9Jd8cr01933bFNY+2sosYYaug4=) 2025-11-23 00:19:05.183132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCySQmsz04c5zvM51H0M+CHJtlGOLyPBnv3Mz2yiPjiRgHdIERnoUMaKK5Dy1hdpKlddWzH3O2/oFBbEDX+txWwLon8r6ZH1KXti6tSzfsfTgVnhkUNlBfdN6N1rsvDeVCqBY5HPiFpF0qZN38EQUu5XJaRjgofTALC/DjSGi4l1uVedj1bPv4cOZQddtM/XfBNwJ1koJ4FEsFWR/U6kLZuV0S875QKF9VOaWDIYHvvfH57nyIuafUoSu6FiC/lt9zRJEw68WfS1KIP1D+jldck6OHRmOIjfQQ11/yqKYsmXNSVEhMEtgX3y6sMj8JRljzldeq59A05CQslFyAA9RikihZBE4VG1CwM+EFkO8Q7EhZtM3c+Wlo4+2SH23PscMPy7D+ttrJxAbjOivzh6gZQ7TwmKBXdXlnD4PbhEdshLxaadHf2I5cMLRh+qx3b4eVTomxPCqeN9Lt4EFsZkF2jEPeBvEpDom97IovyGBx9tV8v1sz5UJ2U8LNE/jIbhBM=) 2025-11-23 00:19:05.183143 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMrrsa1LtJ7ebSVkvsAZSQ65WYQJ8Dw3bMAPy4X0j/E7) 2025-11-23 00:19:05.183154 | orchestrator | 2025-11-23 00:19:05.183165 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:05.183176 | orchestrator | Sunday 23 November 2025 00:19:03 +0000 (0:00:00.963) 0:00:09.810 ******* 2025-11-23 00:19:05.183187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwnDucXEtxcmnrqwpOYClD7Ygb1OnHAx58fJFrPqqGdfCxtFiamahd/+VwoDydnd/pB18t+x1aD2wWdj5AFTrielTVqSaxvgVA/F64qn0vPFgne42mEylEcCEhaOQATY796rFWEZSqMfLRdgnVYRjJWn0m/f1UMh344r+KLw7x4Qs8YJci3KVn0mPD36CmL6JKDPft2iS9k+NR7AeL2UE8BE9k2+KFp3COgpsEHUUsymILQRV1cpqQC03ocU2F44Q3AhKIyBokyFy+3Y5KSj5s+amSz/QWhxTEj1/3LLLCDaiO4UFuD0JmgCpPpNRa7e7NOSn0YaE99raQqZqpFwxeSQpaim5R9cf0fHIS68RXxoPFqrs0o3YxY4q4UUHCftf1OUnaksDeYBoTNjgD6bab3bN12HSBkKX0ho1qIJkpCVR2lk4olmneKdPxJuNzUlgxkZv1ld1vWCBF/CZCbwNEPqaJ7aQBu9moYe1Wal+ZBmzFvC/0bmYPf8VyW1amQak=) 2025-11-23 00:19:05.183198 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICkH2BuBEtOwGFg/ihgDD0iRAPfbjHKPje23afAJjrEP) 2025-11-23 00:19:05.183215 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEdgVvCOaN8n9iJiryk+Fhiz0bftH/YpaX5TlEiEGeQDsO52S/c727ni61/fShgA7DzldnRnM2P3bDnFxxivMFM=) 2025-11-23 00:19:05.183226 | orchestrator | 2025-11-23 00:19:05.183237 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:05.183248 | orchestrator | Sunday 23 November 2025 00:19:04 +0000 (0:00:00.929) 0:00:10.739 ******* 2025-11-23 00:19:05.183268 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1ZPENK3plBtESJUHqvljssYZgTjVv5yDh35QXeW16P24ZbUrYfHymobEShIPwDK9xsZbmoOTLK+7nLMjmuzahea2zor4fXISW/RSCmULEgQphgIOtqwWOjO8qrP/+rT6bVlL1QkYErcani5wZafrYQxqVjYhtswZXr4s08yWTSO+mi4DL8KNHGYyVZs3bismYWgyOv+/wkUnoZW5kpQGI7TTi70ogiERzeg8r72zxIZUrnMhlznX7uc8MUiA1R4vSvpYxUd/UWbDeeUIM/1y3rxTW92Mqr7G7HIVwZqpp6+J62j3toRMZ3z5P+1uhmm9y6ZmDwzTZdBpsWfKmy2IH96pWV50dSPS7E40e3gOxLjS+eIK+BFNeDjlmiFJVBAyU+sXxai9iTYUREKrH+4g1PWwXeY8uHtzyu77ZzrkqtpXZT2DyoJbjk8Ase4mtsGPzNIhpUPWRT2osqWJXG3i2riyY8DrKbVB2cigwxjcT+AXxd8uhGdimL6dIRKfYMhc=) 2025-11-23 00:19:15.071765 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHMsqT+NJ6rdzD6nOaxzBq1fqqF6ZylXV2qAjZEiu7tJaOtu9YsBRznGIWLMCHuSkENhr5jBmvwa1oa1MV58gYc=) 2025-11-23 00:19:15.071870 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPQkmlEWEdQtT2xs6arhzeqTGWg17s/6MAqkokdwkg+J) 2025-11-23 00:19:15.071888 | orchestrator | 2025-11-23 00:19:15.071902 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:15.071915 | orchestrator | Sunday 23 November 2025 00:19:05 +0000 (0:00:00.914) 0:00:11.653 ******* 2025-11-23 00:19:15.071926 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHOsfTWvbgeezl/e0kFSrgfa1ZCarQfFu5u2gELj5qPG) 2025-11-23 00:19:15.071941 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI1cc+wF9DjM6gyddXnbsOcGboPPgnKx6FaPdCfJ8BMsi6ES7LbCjHKOK4UvpAEuhIrb4ONoe+3LiWuOb9uymmj1F3Htx+u9vUCwDJZ31C1fGYEDcKUftpcEtCM4PeTRT6mV525fjAJhtamQ4/4ZUUa8TiUsd+w/7ZsIEqGbpijvMtbdzLttbil2atI7kDUg4OTRk3Il18MDobWLNTSHDW2fpxx9kAv6LTt9PVnwMJvGKa1UiItZO731BojQ+w0ajrVPFB+df0rcval3VBVgv9bZAfEYimgBFG2PejKQ2fk1lRqPlncIv1UrKlqxc27j7e6wtDtWU6pJFFylt01/w7cFGXEBCboJ/IS6yxnKw25qZpAaOPCXoXHwwsM19muchwLNeEldi1k3lxO+YrWBj8z9uEQTc8U/Tu8k9QhFEsB3A6+ARS71s5R1wJ/CkBzuSSz20s5OPveFQKFyfvl+IV1whkUJperJqJkbWFAkYJrvjgCprtG7R8403l5zcRwpM=) 2025-11-23 00:19:15.071957 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL3r6orVTHGVLmZtiqSIyxa2ICAdZ0Z+u8Yu0sVC85p6p8rTF27WsNPpjggvrXw1guhN6hrKcMP1fvTWGU4s6kE=) 2025-11-23 00:19:15.071972 | orchestrator | 2025-11-23 00:19:15.071982 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-11-23 00:19:15.071994 | orchestrator | Sunday 23 November 2025 00:19:06 +0000 
(0:00:00.928) 0:00:12.582 ******* 2025-11-23 00:19:15.072026 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-11-23 00:19:15.072038 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-11-23 00:19:15.072048 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-11-23 00:19:15.072059 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-11-23 00:19:15.072071 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-11-23 00:19:15.072083 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-11-23 00:19:15.072097 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-11-23 00:19:15.072108 | orchestrator | 2025-11-23 00:19:15.072118 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-11-23 00:19:15.072133 | orchestrator | Sunday 23 November 2025 00:19:11 +0000 (0:00:04.936) 0:00:17.518 ******* 2025-11-23 00:19:15.072151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-11-23 00:19:15.072193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-11-23 00:19:15.072207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-11-23 00:19:15.072220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-11-23 00:19:15.072230 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-23 00:19:15.072241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-23 00:19:15.072251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-23 00:19:15.072262 | orchestrator | 2025-11-23 00:19:15.072272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:15.072290 | orchestrator | Sunday 23 November 2025 00:19:11 +0000 (0:00:00.151) 0:00:17.670 ******* 2025-11-23 00:19:15.072311 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJmmNI15rqAR6mCfchICMCD6c+8GaYF8Vptk12gSDgdw) 2025-11-23 00:19:15.072383 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCq9ufIQRxb8pMoc6WAH9zkoCgGUju1CU3VfMv9agNHQchAkWdsoS29br0vKjpCurtHJx4TP46C0YMWBdNuXmudaDGor041IcfnVBzPT8jo1v8bxh+egIcSp8ow1a2QgpSe6QTwoEMHI7+vNyzzw8pmoLqOnSb7GGpgrJs1w6wnWyTuc3+lSCsV4cNkUaKw5j1S9b7tbdimNZ66An5WzXH/3lS6fTzZ8fhT2WsEuB+o8Te378RrgHFqyvt40A3NcqzgK8VHrofQYqK0+ehp6qd6106QqSh14dJb/xpGH1cOhNSAeOb3LbMXnNZ0XYh6eTcNLfy6cpa2zQ39oW3XYRAJZ1O7wHkcxcSS/zZOQ4HP27qoRvJhqBSfPzcHxvBJ8VnI6pthjls+A+xDmJJkhHXxCp8z37yvRA+/bx5xw9fiNRsL0sAcEa5uiavt7J2ERw9yZ04Q/tjX1Upzzm2fOsLhPUmcjGhfjLEshQmwtL0J5wZcta9x9ndxbFe8f/h5bQU=) 2025-11-23 00:19:15.072407 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL6ueXwPhrPvG9ywSfEHh+RHSlJKDCK//iH4ueRlesAcD688JdNFYKIoPz0ryOLriJdiPJ1lKZ9GOCLxdy+uReo=) 2025-11-23 
00:19:15.072430 | orchestrator | 2025-11-23 00:19:15.072451 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:15.072473 | orchestrator | Sunday 23 November 2025 00:19:12 +0000 (0:00:00.951) 0:00:18.621 ******* 2025-11-23 00:19:15.072494 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDO7J8InKPNx6x/mgNyeGbNWuiA4EuakpwYHp1FzcUofnEGnnnreE8Y+NxbmkwteNJtA5wUDmoqAQ9qnvKB77Wtgjy1Nc9bDZiklXoTBaZaTVn2hwSQUt6U6D/wtVvniyT/ezSNMwNDJrMdwDPydpM0nVYfNlxQnCL5kCT1rbwfp/hXT351AvqQTEGQjPB+4tfq+M7eA34+xhXPZCuS6p8KqAyP7bRMVXis08NZWSI8nmobR3RJQjpdFq/WgggE2s7vDkQ8bJp9sosbNT+l8xEpC5f25IbkktP4uQSVxw9+CMGJJ7RooqrvVWHpClKOWZQilmpRiGLzlTNwrRyCQW3a/OrehvkO3i2XmWjoxbt1nrSLwdlX9QUfJdgS+mTeqECZRrEYQcJhAIY8OcXhDbWObTD6GsnJ6cJgodA+xEQfPUrmeyMcZ7R+XvgjjmfLdfRbd5VLU4qjTo8e8p+dQlGr4PmxD5TAo5tTtGj4WlrKaiQKwOpHri810rXDpOwj/s=) 2025-11-23 00:19:15.072511 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGwuRMKaqb2vIaPNkMULx3np3NbfpeGQt7pUC/HUBNENLT3Z4ufN3uPV5jcFlH9WWpyAsnMuc3d6Qfy3MavJIVk=) 2025-11-23 00:19:15.072526 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEVBQU/xRzaDfl6cctO+k2vMmNMwdgwujD4tMAy5ooPE) 2025-11-23 00:19:15.072548 | orchestrator | 2025-11-23 00:19:15.072559 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:15.072571 | orchestrator | Sunday 23 November 2025 00:19:13 +0000 (0:00:00.958) 0:00:19.580 ******* 2025-11-23 00:19:15.072583 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/+2vVqncFlqX/DbJsSDgiDt94n4MrzDRXl/O748OL6240Hg0X5LQpz4ArwJH8bfPrq4jrUbkv67VSTvwhg4iWimc0Rfz3Ningd+E9T+G7KqNwAGvAEkkja/1Xn1whm4CmX0LHBbDpIG8HrI7VP+Me9kFFwmBtTouCpdNOByWLRZVd7ciSbMnPOagx3lantwVM4qss1bbr8noWRk3ee2yPPfJLwsG9Agb0L8qV4bpQsUE8tDs41nQ6Ju5s8SINF+SypZHjZCPLtCzjvJzCj0zNVUYOmkEH4cb3bZB8CCKXvDoyKCE9cAyEYk52yoIWDSY+jKJdxHPgVjWHS6MfIOFD53lqcUhQA12LSCwny7YxKab99snsyT1olJcHt/bKgfO8lf7awt4WGTJhvmd+gMLU/86ZkvAXxT+Os87F7yZw6+XtLsSiD/Rq1nID7KSKUbnS63no5qh9Y3ceE2RGvI+IEsLkZXGOV7V4xvSKzUHUYMLubP6S1NTN7VVEB8AXs8s=) 2025-11-23 00:19:15.072669 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL/O6/iqxzIrvvZKyEaThCHLKZc5Wwn0idXDQ/qTTgBFjSgZvLkR9VbclnuGdEzj1HRwAESW0Q+AZqEm5T0hNDU=) 2025-11-23 00:19:15.072681 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMcKc5MmCkmJ0JbcrCiAVwy0yyHpbCjms2F6hZCJ8nw1) 2025-11-23 00:19:15.072693 | orchestrator | 2025-11-23 00:19:15.072704 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:15.072717 | orchestrator | Sunday 23 November 2025 00:19:14 +0000 (0:00:00.962) 0:00:20.542 ******* 2025-11-23 00:19:15.072736 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFbsxQWFDCv5Exof4GKWB+uyG0EcsRElDoYBTeja7sEp4vJSo5fdUtCevwh7h9Jd8cr01933bFNY+2sosYYaug4=) 2025-11-23 00:19:15.072749 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCySQmsz04c5zvM51H0M+CHJtlGOLyPBnv3Mz2yiPjiRgHdIERnoUMaKK5Dy1hdpKlddWzH3O2/oFBbEDX+txWwLon8r6ZH1KXti6tSzfsfTgVnhkUNlBfdN6N1rsvDeVCqBY5HPiFpF0qZN38EQUu5XJaRjgofTALC/DjSGi4l1uVedj1bPv4cOZQddtM/XfBNwJ1koJ4FEsFWR/U6kLZuV0S875QKF9VOaWDIYHvvfH57nyIuafUoSu6FiC/lt9zRJEw68WfS1KIP1D+jldck6OHRmOIjfQQ11/yqKYsmXNSVEhMEtgX3y6sMj8JRljzldeq59A05CQslFyAA9RikihZBE4VG1CwM+EFkO8Q7EhZtM3c+Wlo4+2SH23PscMPy7D+ttrJxAbjOivzh6gZQ7TwmKBXdXlnD4PbhEdshLxaadHf2I5cMLRh+qx3b4eVTomxPCqeN9Lt4EFsZkF2jEPeBvEpDom97IovyGBx9tV8v1sz5UJ2U8LNE/jIbhBM=) 2025-11-23 00:19:15.072777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMrrsa1LtJ7ebSVkvsAZSQ65WYQJ8Dw3bMAPy4X0j/E7) 2025-11-23 00:19:18.830378 | orchestrator | 2025-11-23 00:19:18.830491 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:18.830508 | orchestrator | Sunday 23 November 2025 00:19:15 +0000 (0:00:01.004) 0:00:21.547 ******* 2025-11-23 00:19:18.830521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICkH2BuBEtOwGFg/ihgDD0iRAPfbjHKPje23afAJjrEP) 2025-11-23 00:19:18.830537 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwnDucXEtxcmnrqwpOYClD7Ygb1OnHAx58fJFrPqqGdfCxtFiamahd/+VwoDydnd/pB18t+x1aD2wWdj5AFTrielTVqSaxvgVA/F64qn0vPFgne42mEylEcCEhaOQATY796rFWEZSqMfLRdgnVYRjJWn0m/f1UMh344r+KLw7x4Qs8YJci3KVn0mPD36CmL6JKDPft2iS9k+NR7AeL2UE8BE9k2+KFp3COgpsEHUUsymILQRV1cpqQC03ocU2F44Q3AhKIyBokyFy+3Y5KSj5s+amSz/QWhxTEj1/3LLLCDaiO4UFuD0JmgCpPpNRa7e7NOSn0YaE99raQqZqpFwxeSQpaim5R9cf0fHIS68RXxoPFqrs0o3YxY4q4UUHCftf1OUnaksDeYBoTNjgD6bab3bN12HSBkKX0ho1qIJkpCVR2lk4olmneKdPxJuNzUlgxkZv1ld1vWCBF/CZCbwNEPqaJ7aQBu9moYe1Wal+ZBmzFvC/0bmYPf8VyW1amQak=) 2025-11-23 00:19:18.830553 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEdgVvCOaN8n9iJiryk+Fhiz0bftH/YpaX5TlEiEGeQDsO52S/c727ni61/fShgA7DzldnRnM2P3bDnFxxivMFM=) 2025-11-23 00:19:18.830566 | orchestrator | 2025-11-23 00:19:18.830577 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:18.830672 | orchestrator | Sunday 23 November 2025 00:19:15 +0000 (0:00:00.929) 0:00:22.477 ******* 2025-11-23 00:19:18.830687 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1ZPENK3plBtESJUHqvljssYZgTjVv5yDh35QXeW16P24ZbUrYfHymobEShIPwDK9xsZbmoOTLK+7nLMjmuzahea2zor4fXISW/RSCmULEgQphgIOtqwWOjO8qrP/+rT6bVlL1QkYErcani5wZafrYQxqVjYhtswZXr4s08yWTSO+mi4DL8KNHGYyVZs3bismYWgyOv+/wkUnoZW5kpQGI7TTi70ogiERzeg8r72zxIZUrnMhlznX7uc8MUiA1R4vSvpYxUd/UWbDeeUIM/1y3rxTW92Mqr7G7HIVwZqpp6+J62j3toRMZ3z5P+1uhmm9y6ZmDwzTZdBpsWfKmy2IH96pWV50dSPS7E40e3gOxLjS+eIK+BFNeDjlmiFJVBAyU+sXxai9iTYUREKrH+4g1PWwXeY8uHtzyu77ZzrkqtpXZT2DyoJbjk8Ase4mtsGPzNIhpUPWRT2osqWJXG3i2riyY8DrKbVB2cigwxjcT+AXxd8uhGdimL6dIRKfYMhc=) 2025-11-23 00:19:18.830699 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHMsqT+NJ6rdzD6nOaxzBq1fqqF6ZylXV2qAjZEiu7tJaOtu9YsBRznGIWLMCHuSkENhr5jBmvwa1oa1MV58gYc=) 2025-11-23 00:19:18.830710 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPQkmlEWEdQtT2xs6arhzeqTGWg17s/6MAqkokdwkg+J) 2025-11-23 00:19:18.830721 | orchestrator | 2025-11-23 00:19:18.830732 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-23 00:19:18.830743 | orchestrator | Sunday 23 November 2025 00:19:16 +0000 (0:00:00.943) 0:00:23.421 ******* 2025-11-23 00:19:18.830754 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHOsfTWvbgeezl/e0kFSrgfa1ZCarQfFu5u2gELj5qPG) 2025-11-23 00:19:18.830765 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI1cc+wF9DjM6gyddXnbsOcGboPPgnKx6FaPdCfJ8BMsi6ES7LbCjHKOK4UvpAEuhIrb4ONoe+3LiWuOb9uymmj1F3Htx+u9vUCwDJZ31C1fGYEDcKUftpcEtCM4PeTRT6mV525fjAJhtamQ4/4ZUUa8TiUsd+w/7ZsIEqGbpijvMtbdzLttbil2atI7kDUg4OTRk3Il18MDobWLNTSHDW2fpxx9kAv6LTt9PVnwMJvGKa1UiItZO731BojQ+w0ajrVPFB+df0rcval3VBVgv9bZAfEYimgBFG2PejKQ2fk1lRqPlncIv1UrKlqxc27j7e6wtDtWU6pJFFylt01/w7cFGXEBCboJ/IS6yxnKw25qZpAaOPCXoXHwwsM19muchwLNeEldi1k3lxO+YrWBj8z9uEQTc8U/Tu8k9QhFEsB3A6+ARS71s5R1wJ/CkBzuSSz20s5OPveFQKFyfvl+IV1whkUJperJqJkbWFAkYJrvjgCprtG7R8403l5zcRwpM=) 2025-11-23 00:19:18.830776 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL3r6orVTHGVLmZtiqSIyxa2ICAdZ0Z+u8Yu0sVC85p6p8rTF27WsNPpjggvrXw1guhN6hrKcMP1fvTWGU4s6kE=) 2025-11-23 00:19:18.830787 | orchestrator | 2025-11-23 00:19:18.830797 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-11-23 00:19:18.830808 | orchestrator | Sunday 23 November 2025 00:19:17 +0000 (0:00:00.947) 0:00:24.368 ******* 2025-11-23 00:19:18.830819 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-23 00:19:18.830830 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-11-23 00:19:18.830840 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-11-23 00:19:18.830851 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-11-23 00:19:18.830861 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-11-23 00:19:18.830872 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-11-23 00:19:18.830882 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-11-23 00:19:18.830893 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:19:18.830904 | orchestrator | 2025-11-23 00:19:18.830934 | 
orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-11-23 00:19:18.830947 | orchestrator | Sunday 23 November 2025 00:19:18 +0000 (0:00:00.134) 0:00:24.503 ******* 2025-11-23 00:19:18.830959 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:19:18.830971 | orchestrator | 2025-11-23 00:19:18.830984 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-11-23 00:19:18.830996 | orchestrator | Sunday 23 November 2025 00:19:18 +0000 (0:00:00.053) 0:00:24.556 ******* 2025-11-23 00:19:18.831008 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:19:18.831028 | orchestrator | 2025-11-23 00:19:18.831040 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-11-23 00:19:18.831053 | orchestrator | Sunday 23 November 2025 00:19:18 +0000 (0:00:00.048) 0:00:24.605 ******* 2025-11-23 00:19:18.831065 | orchestrator | changed: [testbed-manager] 2025-11-23 00:19:18.831077 | orchestrator | 2025-11-23 00:19:18.831089 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:19:18.831101 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-23 00:19:18.831114 | orchestrator | 2025-11-23 00:19:18.831127 | orchestrator | 2025-11-23 00:19:18.831139 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:19:18.831151 | orchestrator | Sunday 23 November 2025 00:19:18 +0000 (0:00:00.575) 0:00:25.180 ******* 2025-11-23 00:19:18.831163 | orchestrator | =============================================================================== 2025-11-23 00:19:18.831176 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.66s 2025-11-23 00:19:18.831188 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with 
ansible_host --- 4.94s 2025-11-23 00:19:18.831201 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-11-23 00:19:18.831213 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-11-23 00:19:18.831225 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2025-11-23 00:19:18.831254 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2025-11-23 00:19:18.831267 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2025-11-23 00:19:18.831279 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-11-23 00:19:18.831291 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-11-23 00:19:18.831302 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-11-23 00:19:18.831312 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-11-23 00:19:18.831323 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2025-11-23 00:19:18.831333 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2025-11-23 00:19:18.831344 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2025-11-23 00:19:18.831354 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s 2025-11-23 00:19:18.831369 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.89s 2025-11-23 00:19:18.831380 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.58s 2025-11-23 00:19:18.831391 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all 
hosts with ansible_host --- 0.15s 2025-11-23 00:19:18.831402 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2025-11-23 00:19:18.831413 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.13s 2025-11-23 00:19:19.005177 | orchestrator | + osism apply squid 2025-11-23 00:19:30.755350 | orchestrator | 2025-11-23 00:19:30 | INFO  | Task 846a0fbf-35e9-4133-b375-31fd897ce5d2 (squid) was prepared for execution. 2025-11-23 00:19:30.755463 | orchestrator | 2025-11-23 00:19:30 | INFO  | It takes a moment until task 846a0fbf-35e9-4133-b375-31fd897ce5d2 (squid) has been started and output is visible here. 2025-11-23 00:21:23.040639 | orchestrator | 2025-11-23 00:21:23.040757 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-11-23 00:21:23.040773 | orchestrator | 2025-11-23 00:21:23.040785 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-11-23 00:21:23.040797 | orchestrator | Sunday 23 November 2025 00:19:34 +0000 (0:00:00.119) 0:00:00.119 ******* 2025-11-23 00:21:23.040837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-11-23 00:21:23.040849 | orchestrator | 2025-11-23 00:21:23.040860 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-11-23 00:21:23.040872 | orchestrator | Sunday 23 November 2025 00:19:34 +0000 (0:00:00.085) 0:00:00.205 ******* 2025-11-23 00:21:23.040883 | orchestrator | ok: [testbed-manager] 2025-11-23 00:21:23.040894 | orchestrator | 2025-11-23 00:21:23.040905 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-11-23 00:21:23.040915 | orchestrator | Sunday 23 November 2025 00:19:35 +0000 (0:00:01.080) 0:00:01.285 ******* 2025-11-23 
00:21:23.040926 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-11-23 00:21:23.040937 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-11-23 00:21:23.040948 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-11-23 00:21:23.040958 | orchestrator | 2025-11-23 00:21:23.040969 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-11-23 00:21:23.040979 | orchestrator | Sunday 23 November 2025 00:19:36 +0000 (0:00:01.056) 0:00:02.342 ******* 2025-11-23 00:21:23.040990 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-11-23 00:21:23.041000 | orchestrator | 2025-11-23 00:21:23.041011 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-11-23 00:21:23.041021 | orchestrator | Sunday 23 November 2025 00:19:37 +0000 (0:00:00.984) 0:00:03.326 ******* 2025-11-23 00:21:23.041032 | orchestrator | ok: [testbed-manager] 2025-11-23 00:21:23.041042 | orchestrator | 2025-11-23 00:21:23.041053 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-11-23 00:21:23.041063 | orchestrator | Sunday 23 November 2025 00:19:37 +0000 (0:00:00.302) 0:00:03.628 ******* 2025-11-23 00:21:23.041074 | orchestrator | changed: [testbed-manager] 2025-11-23 00:21:23.041084 | orchestrator | 2025-11-23 00:21:23.041095 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-11-23 00:21:23.041105 | orchestrator | Sunday 23 November 2025 00:19:38 +0000 (0:00:00.815) 0:00:04.444 ******* 2025-11-23 00:21:23.041116 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-11-23 00:21:23.041127 | orchestrator | ok: [testbed-manager] 2025-11-23 00:21:23.041138 | orchestrator | 2025-11-23 00:21:23.041150 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-11-23 00:21:23.041162 | orchestrator | Sunday 23 November 2025 00:20:10 +0000 (0:00:31.570) 0:00:36.015 ******* 2025-11-23 00:21:23.041174 | orchestrator | changed: [testbed-manager] 2025-11-23 00:21:23.041186 | orchestrator | 2025-11-23 00:21:23.041198 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-11-23 00:21:23.041211 | orchestrator | Sunday 23 November 2025 00:20:22 +0000 (0:00:11.930) 0:00:47.946 ******* 2025-11-23 00:21:23.041223 | orchestrator | Pausing for 60 seconds 2025-11-23 00:21:23.041237 | orchestrator | changed: [testbed-manager] 2025-11-23 00:21:23.041249 | orchestrator | 2025-11-23 00:21:23.041261 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-11-23 00:21:23.041274 | orchestrator | Sunday 23 November 2025 00:21:22 +0000 (0:01:00.077) 0:01:48.023 ******* 2025-11-23 00:21:23.041286 | orchestrator | ok: [testbed-manager] 2025-11-23 00:21:23.041298 | orchestrator | 2025-11-23 00:21:23.041310 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-11-23 00:21:23.041322 | orchestrator | Sunday 23 November 2025 00:21:22 +0000 (0:00:00.059) 0:01:48.083 ******* 2025-11-23 00:21:23.041334 | orchestrator | changed: [testbed-manager] 2025-11-23 00:21:23.041346 | orchestrator | 2025-11-23 00:21:23.041358 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:21:23.041371 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:21:23.041391 | orchestrator | 2025-11-23 00:21:23.041403 | orchestrator | 2025-11-23 00:21:23.041415 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-11-23 00:21:23.041427 | orchestrator | Sunday 23 November 2025 00:21:22 +0000 (0:00:00.561) 0:01:48.644 ******* 2025-11-23 00:21:23.041440 | orchestrator | =============================================================================== 2025-11-23 00:21:23.041452 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-11-23 00:21:23.041464 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.57s 2025-11-23 00:21:23.041477 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.93s 2025-11-23 00:21:23.041489 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.08s 2025-11-23 00:21:23.041501 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.06s 2025-11-23 00:21:23.041514 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.98s 2025-11-23 00:21:23.041550 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.82s 2025-11-23 00:21:23.041561 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.56s 2025-11-23 00:21:23.041572 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.30s 2025-11-23 00:21:23.041582 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-11-23 00:21:23.041593 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-11-23 00:21:23.216618 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-23 00:21:23.216850 | orchestrator | ++ semver latest 9.0.0 2025-11-23 00:21:23.256806 | orchestrator | + [[ -1 -lt 0 ]] 2025-11-23 00:21:23.256894 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-23 00:21:23.257063 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-11-23 00:21:35.166951 | orchestrator | 2025-11-23 00:21:35 | INFO  | Task c18341e0-7ea8-4538-9f5d-48eb41cd60f1 (operator) was prepared for execution.
2025-11-23 00:21:35.167055 | orchestrator | 2025-11-23 00:21:35 | INFO  | It takes a moment until task c18341e0-7ea8-4538-9f5d-48eb41cd60f1 (operator) has been started and output is visible here.
2025-11-23 00:21:50.662235 | orchestrator |
2025-11-23 00:21:50.662343 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-11-23 00:21:50.662360 | orchestrator |
2025-11-23 00:21:50.662372 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-23 00:21:50.662384 | orchestrator | Sunday 23 November 2025 00:21:38 +0000 (0:00:00.104) 0:00:00.104 *******
2025-11-23 00:21:50.662395 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:21:50.662407 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:21:50.662418 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:21:50.662429 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:21:50.662440 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:21:50.662451 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:21:50.662462 | orchestrator |
2025-11-23 00:21:50.662473 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-11-23 00:21:50.662484 | orchestrator | Sunday 23 November 2025 00:21:42 +0000 (0:00:04.164) 0:00:04.269 *******
2025-11-23 00:21:50.662495 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:21:50.662506 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:21:50.662597 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:21:50.662619 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:21:50.662638 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:21:50.662657 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:21:50.662676 | orchestrator |
2025-11-23 00:21:50.662692 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-11-23 00:21:50.662704 | orchestrator |
2025-11-23 00:21:50.662715 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-11-23 00:21:50.662726 | orchestrator | Sunday 23 November 2025 00:21:43 +0000 (0:00:00.727) 0:00:04.997 *******
2025-11-23 00:21:50.662737 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:21:50.662773 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:21:50.662787 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:21:50.662817 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:21:50.662830 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:21:50.662842 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:21:50.662854 | orchestrator |
2025-11-23 00:21:50.662866 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-11-23 00:21:50.662878 | orchestrator | Sunday 23 November 2025 00:21:43 +0000 (0:00:00.123) 0:00:05.120 *******
2025-11-23 00:21:50.662890 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:21:50.662902 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:21:50.662914 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:21:50.662927 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:21:50.662939 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:21:50.662951 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:21:50.662963 | orchestrator |
2025-11-23 00:21:50.662976 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-11-23 00:21:50.662987 | orchestrator | Sunday 23 November 2025 00:21:43 +0000 (0:00:00.129) 0:00:05.250 *******
2025-11-23 00:21:50.662998 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:21:50.663010 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:21:50.663020 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:21:50.663031 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:21:50.663041 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:21:50.663052 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:21:50.663063 | orchestrator |
2025-11-23 00:21:50.663073 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-11-23 00:21:50.663084 | orchestrator | Sunday 23 November 2025 00:21:44 +0000 (0:00:00.656) 0:00:05.907 *******
2025-11-23 00:21:50.663095 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:21:50.663105 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:21:50.663116 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:21:50.663126 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:21:50.663137 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:21:50.663148 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:21:50.663158 | orchestrator |
2025-11-23 00:21:50.663169 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-11-23 00:21:50.663180 | orchestrator | Sunday 23 November 2025 00:21:45 +0000 (0:00:00.737) 0:00:06.644 *******
2025-11-23 00:21:50.663190 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-11-23 00:21:50.663201 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-11-23 00:21:50.663212 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-11-23 00:21:50.663223 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-11-23 00:21:50.663233 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-11-23 00:21:50.663244 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-11-23 00:21:50.663255 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-11-23 00:21:50.663267 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-11-23 00:21:50.663283 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-11-23 00:21:50.663294 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-11-23 00:21:50.663305 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-11-23 00:21:50.663315 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-11-23 00:21:50.663326 | orchestrator |
2025-11-23 00:21:50.663337 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-11-23 00:21:50.663348 | orchestrator | Sunday 23 November 2025 00:21:46 +0000 (0:00:01.144) 0:00:07.789 *******
2025-11-23 00:21:50.663359 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:21:50.663370 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:21:50.663381 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:21:50.663391 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:21:50.663402 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:21:50.663413 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:21:50.663424 | orchestrator |
2025-11-23 00:21:50.663443 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-11-23 00:21:50.663460 | orchestrator | Sunday 23 November 2025 00:21:47 +0000 (0:00:01.125) 0:00:08.915 *******
2025-11-23 00:21:50.663478 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-11-23 00:21:50.663494 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-11-23 00:21:50.663512 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-11-23 00:21:50.663560 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-11-23 00:21:50.663603 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-11-23 00:21:50.663623 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-11-23 00:21:50.663642 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-11-23 00:21:50.663654 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-11-23 00:21:50.663664 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-11-23 00:21:50.663675 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-11-23 00:21:50.663686 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-11-23 00:21:50.663697 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-11-23 00:21:50.663707 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-11-23 00:21:50.663718 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-11-23 00:21:50.663728 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-11-23 00:21:50.663739 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-11-23 00:21:50.663750 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-11-23 00:21:50.663761 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-11-23 00:21:50.663772 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-11-23 00:21:50.663783 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-11-23 00:21:50.663793 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-11-23 00:21:50.663804 | orchestrator |
2025-11-23 00:21:50.663815 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-11-23 00:21:50.663826 | orchestrator | Sunday 23 November 2025 00:21:48 +0000 (0:00:01.244) 0:00:10.159 *******
2025-11-23 00:21:50.663837 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:21:50.663848 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:21:50.663859 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:21:50.663870 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:21:50.663880 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:21:50.663891 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:21:50.663901 | orchestrator |
2025-11-23 00:21:50.663912 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2025-11-23 00:21:50.663923 | orchestrator | Sunday 23 November 2025 00:21:48 +0000 (0:00:00.138) 0:00:10.298 *******
2025-11-23 00:21:50.663934 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:21:50.663945 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:21:50.663955 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:21:50.663966 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:21:50.663977 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:21:50.663987 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:21:50.663998 | orchestrator |
2025-11-23 00:21:50.664009 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-11-23 00:21:50.664020 | orchestrator | Sunday 23 November 2025 00:21:49 +0000 (0:00:00.140) 0:00:10.438 *******
2025-11-23 00:21:50.664031 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:21:50.664042 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:21:50.664052 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:21:50.664072 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:21:50.664083 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:21:50.664094 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:21:50.664105 | orchestrator |
2025-11-23 00:21:50.664116 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-11-23 00:21:50.664127 | orchestrator | Sunday 23 November 2025 00:21:49 +0000 (0:00:00.555) 0:00:10.993 *******
2025-11-23 00:21:50.664137 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:21:50.664148 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:21:50.664159 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:21:50.664170 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:21:50.664180 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:21:50.664191 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:21:50.664202 | orchestrator |
2025-11-23 00:21:50.664212 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-11-23 00:21:50.664223 | orchestrator | Sunday 23 November 2025 00:21:49 +0000 (0:00:00.134) 0:00:11.127 *******
2025-11-23 00:21:50.664234 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-11-23 00:21:50.664246 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-11-23 00:21:50.664257 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:21:50.664267 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:21:50.664278 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-23 00:21:50.664289 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:21:50.664300 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-23 00:21:50.664311 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:21:50.664321 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-11-23 00:21:50.664332 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-23 00:21:50.664343 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:21:50.664355 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:21:50.664373 | orchestrator |
2025-11-23 00:21:50.664392 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-11-23 00:21:50.664409 | orchestrator | Sunday 23 November 2025 00:21:50 +0000 (0:00:00.670) 0:00:11.798 *******
2025-11-23 00:21:50.664420 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:21:50.664431 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:21:50.664442 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:21:50.664452 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:21:50.664463 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:21:50.664474 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:21:50.664484 | orchestrator |
2025-11-23 00:21:50.664495 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-11-23 00:21:50.664506 | orchestrator | Sunday 23 November 2025 00:21:50 +0000 (0:00:00.129) 0:00:11.928 *******
2025-11-23 00:21:50.664546 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:21:50.664566 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:21:50.664586 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:21:50.664604 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:21:50.664633 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:21:51.757240 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:21:51.758185 | orchestrator |
2025-11-23 00:21:51.758276 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-11-23 00:21:51.758293 | orchestrator | Sunday 23 November 2025 00:21:50 +0000 (0:00:00.140) 0:00:12.068 *******
2025-11-23 00:21:51.758305 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:21:51.758316 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:21:51.758327 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:21:51.758338 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:21:51.758348 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:21:51.758359 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:21:51.758371 | orchestrator |
2025-11-23 00:21:51.758382 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-11-23 00:21:51.758429 | orchestrator | Sunday 23 November 2025 00:21:50 +0000 (0:00:00.118) 0:00:12.187 *******
2025-11-23 00:21:51.758441 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:21:51.758451 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:21:51.758462 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:21:51.758473 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:21:51.758484 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:21:51.758494 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:21:51.758505 | orchestrator |
2025-11-23 00:21:51.758542 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-11-23 00:21:51.758555 | orchestrator | Sunday 23 November 2025 00:21:51 +0000 (0:00:00.624) 0:00:12.811 *******
2025-11-23 00:21:51.758566 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:21:51.758596 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:21:51.758607 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:21:51.758619 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:21:51.758630 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:21:51.758641 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:21:51.758652 | orchestrator |
2025-11-23 00:21:51.758664 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:21:51.758676 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-23 00:21:51.758690 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-23 00:21:51.758701 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-23 00:21:51.758713 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-23 00:21:51.758724 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-23 00:21:51.758735 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-23 00:21:51.758746 | orchestrator |
2025-11-23 00:21:51.758758 | orchestrator |
2025-11-23 00:21:51.758769 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:21:51.758781 | orchestrator | Sunday 23 November 2025 00:21:51 +0000 (0:00:00.206) 0:00:13.018 *******
2025-11-23 00:21:51.758792 | orchestrator | ===============================================================================
2025-11-23 00:21:51.758803 | orchestrator | Gathering Facts --------------------------------------------------------- 4.16s
2025-11-23 00:21:51.758815 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.24s
2025-11-23 00:21:51.758826 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2025-11-23 00:21:51.758837 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.13s
2025-11-23 00:21:51.758849 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.74s
2025-11-23 00:21:51.758865 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2025-11-23 00:21:51.758877 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s
2025-11-23 00:21:51.758888 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.66s
2025-11-23 00:21:51.758899 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s
2025-11-23 00:21:51.758910 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-11-23 00:21:51.758922 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-11-23 00:21:51.758933 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2025-11-23 00:21:51.758951 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.14s
2025-11-23 00:21:51.758963 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2025-11-23 00:21:51.758974 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.13s
2025-11-23 00:21:51.758985 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s
2025-11-23 00:21:51.758996 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s
2025-11-23 00:21:51.759008 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.12s
2025-11-23 00:21:51.759019 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.12s
2025-11-23 00:21:51.940969 | orchestrator | + osism apply --environment custom facts
2025-11-23 00:21:53.636613 | orchestrator | 2025-11-23 00:21:53 | INFO  | Trying to run play facts in environment custom
2025-11-23 00:22:03.820802 | orchestrator | 2025-11-23 00:22:03 | INFO  | Task b1bf3268-9af3-4216-ae33-ea846e6d1f0a (facts) was prepared for execution.
2025-11-23 00:22:03.820905 | orchestrator | 2025-11-23 00:22:03 | INFO  | It takes a moment until task b1bf3268-9af3-4216-ae33-ea846e6d1f0a (facts) has been started and output is visible here.
2025-11-23 00:22:49.728646 | orchestrator |
2025-11-23 00:22:49.728765 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-11-23 00:22:49.728783 | orchestrator |
2025-11-23 00:22:49.728795 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-11-23 00:22:49.728807 | orchestrator | Sunday 23 November 2025 00:22:07 +0000 (0:00:00.072) 0:00:00.072 *******
2025-11-23 00:22:49.728818 | orchestrator | ok: [testbed-manager]
2025-11-23 00:22:49.728831 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:22:49.728843 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:22:49.728854 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:22:49.728865 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:22:49.728876 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:22:49.728887 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:22:49.728898 | orchestrator |
2025-11-23 00:22:49.728909 | orchestrator | TASK [Copy fact file] **********************************************************
2025-11-23 00:22:49.728921 | orchestrator | Sunday 23 November 2025 00:22:08 +0000 (0:00:01.306) 0:00:01.379 *******
2025-11-23 00:22:49.728933 | orchestrator | ok: [testbed-manager]
2025-11-23 00:22:49.728945 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:22:49.728958 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:22:49.728970 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:22:49.728982 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:22:49.728993 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:22:49.729006 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:22:49.729019 | orchestrator |
2025-11-23 00:22:49.729031 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-11-23 00:22:49.729043 | orchestrator |
2025-11-23 00:22:49.729056 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-11-23 00:22:49.729068 | orchestrator | Sunday 23 November 2025 00:22:09 +0000 (0:00:01.070) 0:00:02.450 *******
2025-11-23 00:22:49.729080 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:22:49.729092 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:22:49.729104 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:22:49.729118 | orchestrator |
2025-11-23 00:22:49.729131 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-11-23 00:22:49.729146 | orchestrator | Sunday 23 November 2025 00:22:09 +0000 (0:00:00.072) 0:00:02.522 *******
2025-11-23 00:22:49.729160 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:22:49.729173 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:22:49.729187 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:22:49.729201 | orchestrator |
2025-11-23 00:22:49.729215 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-11-23 00:22:49.729253 | orchestrator | Sunday 23 November 2025 00:22:10 +0000 (0:00:00.175) 0:00:02.697 *******
2025-11-23 00:22:49.729266 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:22:49.729279 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:22:49.729290 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:22:49.729302 | orchestrator |
2025-11-23 00:22:49.729313 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-11-23 00:22:49.729325 | orchestrator | Sunday 23 November 2025 00:22:10 +0000 (0:00:00.169) 0:00:02.867 *******
2025-11-23 00:22:49.729339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:22:49.729353 | orchestrator |
2025-11-23 00:22:49.729366 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-11-23 00:22:49.729377 | orchestrator | Sunday 23 November 2025 00:22:10 +0000 (0:00:00.109) 0:00:02.976 *******
2025-11-23 00:22:49.729388 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:22:49.729399 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:22:49.729409 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:22:49.729421 | orchestrator |
2025-11-23 00:22:49.729431 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-11-23 00:22:49.729442 | orchestrator | Sunday 23 November 2025 00:22:10 +0000 (0:00:00.418) 0:00:03.394 *******
2025-11-23 00:22:49.729453 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:22:49.729465 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:22:49.729475 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:22:49.729486 | orchestrator |
2025-11-23 00:22:49.729522 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-11-23 00:22:49.729532 | orchestrator | Sunday 23 November 2025 00:22:10 +0000 (0:00:00.103) 0:00:03.497 *******
2025-11-23 00:22:49.729543 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:22:49.729554 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:22:49.729563 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:22:49.729574 | orchestrator |
2025-11-23 00:22:49.729584 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-11-23 00:22:49.729595 | orchestrator | Sunday 23 November 2025 00:22:11 +0000 (0:00:01.016) 0:00:04.514 *******
2025-11-23 00:22:49.729605 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:22:49.729615 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:22:49.729625 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:22:49.729635 | orchestrator |
2025-11-23 00:22:49.729647 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-11-23 00:22:49.729658 | orchestrator | Sunday 23 November 2025 00:22:12 +0000 (0:00:00.458) 0:00:04.972 *******
2025-11-23 00:22:49.729668 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:22:49.729679 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:22:49.729690 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:22:49.729701 | orchestrator |
2025-11-23 00:22:49.729712 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-11-23 00:22:49.729724 | orchestrator | Sunday 23 November 2025 00:22:13 +0000 (0:00:01.052) 0:00:06.025 *******
2025-11-23 00:22:49.729736 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:22:49.729748 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:22:49.729760 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:22:49.729772 | orchestrator |
2025-11-23 00:22:49.729782 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-11-23 00:22:49.729793 | orchestrator | Sunday 23 November 2025 00:22:33 +0000 (0:00:20.112) 0:00:26.137 *******
2025-11-23 00:22:49.729803 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:22:49.729815 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:22:49.729827 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:22:49.729838 | orchestrator |
2025-11-23 00:22:49.729849 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-11-23 00:22:49.729882 | orchestrator | Sunday 23 November 2025 00:22:33 +0000 (0:00:00.072) 0:00:26.209 *******
2025-11-23 00:22:49.729901 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:22:49.729911 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:22:49.729922 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:22:49.729928 | orchestrator |
2025-11-23 00:22:49.729935 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-11-23 00:22:49.729942 | orchestrator | Sunday 23 November 2025 00:22:40 +0000 (0:00:07.056) 0:00:33.266 *******
2025-11-23 00:22:49.729949 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:22:49.729955 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:22:49.729962 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:22:49.729969 | orchestrator |
2025-11-23 00:22:49.729975 | orchestrator | TASK [Copy fact files] *********************************************************
2025-11-23 00:22:49.729982 | orchestrator | Sunday 23 November 2025 00:22:40 +0000 (0:00:00.395) 0:00:33.661 *******
2025-11-23 00:22:49.729989 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-11-23 00:22:49.729996 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-11-23 00:22:49.730002 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-11-23 00:22:49.730009 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-11-23 00:22:49.730066 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-11-23 00:22:49.730074 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-11-23 00:22:49.730081 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-11-23 00:22:49.730087 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-11-23 00:22:49.730094 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-11-23 00:22:49.730100 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-11-23 00:22:49.730107 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-11-23 00:22:49.730114 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-11-23 00:22:49.730120 | orchestrator |
2025-11-23 00:22:49.730165 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-11-23 00:22:49.730173 | orchestrator | Sunday 23 November 2025 00:22:44 +0000 (0:00:03.136) 0:00:36.797 *******
2025-11-23 00:22:49.730179 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:22:49.730186 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:22:49.730192 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:22:49.730199 | orchestrator |
2025-11-23 00:22:49.730205 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-23 00:22:49.730212 | orchestrator |
2025-11-23 00:22:49.730218 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-23 00:22:49.730225 | orchestrator | Sunday 23 November 2025 00:22:45 +0000 (0:00:01.020) 0:00:37.818 *******
2025-11-23 00:22:49.730232 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:22:49.730238 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:22:49.730245 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:22:49.730251 | orchestrator | ok: [testbed-manager]
2025-11-23 00:22:49.730258 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:22:49.730264 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:22:49.730271 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:22:49.730304 | orchestrator |
2025-11-23 00:22:49.730312 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:22:49.730320 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:22:49.730331 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:22:49.730339 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:22:49.730346 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:22:49.730359 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:22:49.730366 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:22:49.730373 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:22:49.730380 | orchestrator |
2025-11-23 00:22:49.730386 | orchestrator |
2025-11-23 00:22:49.730393 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:22:49.730400 | orchestrator | Sunday 23 November 2025 00:22:49 +0000 (0:00:04.553) 0:00:42.371 *******
2025-11-23 00:22:49.730406 | orchestrator | ===============================================================================
2025-11-23 00:22:49.730413 | orchestrator | osism.commons.repository : Update package cache ------------------------ 20.11s
2025-11-23 00:22:49.730420 | orchestrator | Install required packages (Debian) -------------------------------------- 7.06s
2025-11-23 00:22:49.730426 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.55s
2025-11-23 00:22:49.730433 | orchestrator | Copy fact files --------------------------------------------------------- 3.14s
2025-11-23 00:22:49.730439 | orchestrator | Create custom facts directory ------------------------------------------- 1.31s
2025-11-23 00:22:49.730446 | orchestrator | Copy fact file ---------------------------------------------------------- 1.07s
2025-11-23 00:22:49.730459 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2025-11-23 00:22:49.863808 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.02s
2025-11-23 00:22:49.863903 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2025-11-23 00:22:49.863918 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-11-23 00:22:49.863930 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2025-11-23 00:22:49.863941 | orchestrator | Create custom facts directory ------------------------------------------- 0.40s
2025-11-23 00:22:49.863952 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2025-11-23 00:22:49.863962 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2025-11-23 00:22:49.863973 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s
2025-11-23 00:22:49.863985 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-11-23 00:22:49.863995 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.07s
2025-11-23 00:22:49.864006 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s
2025-11-23 00:22:50.047877 | orchestrator | + osism apply bootstrap
2025-11-23 00:23:01.906709 | orchestrator | 2025-11-23 00:23:01 | INFO  | Task 29a4c53f-1c88-4fd7-9f5e-2aa7c52c0f6a (bootstrap) was prepared for execution.
2025-11-23 00:23:01.906827 | orchestrator | 2025-11-23 00:23:01 | INFO  | It takes a moment until task 29a4c53f-1c88-4fd7-9f5e-2aa7c52c0f6a (bootstrap) has been started and output is visible here.
2025-11-23 00:23:16.548650 | orchestrator |
2025-11-23 00:23:16.548762 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-11-23 00:23:16.548780 | orchestrator |
2025-11-23 00:23:16.548792 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-11-23 00:23:16.548804 | orchestrator | Sunday 23 November 2025 00:23:05 +0000 (0:00:00.123) 0:00:00.123 *******
2025-11-23 00:23:16.548815 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:16.548827 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:16.548839 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:16.548875 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:16.548885 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:16.548895 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:16.548905 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:16.548916 | orchestrator |
2025-11-23 00:23:16.548927 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-23 00:23:16.548938 | orchestrator |
2025-11-23 00:23:16.548950 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-23 00:23:16.548961 | orchestrator | Sunday 23 November 2025 00:23:05 +0000 (0:00:00.185) 0:00:00.309 *******
2025-11-23 00:23:16.548972 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:16.548979 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:16.548986 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:16.548992 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:16.548999 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:16.549006 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:16.549013 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:16.549019 | orchestrator |
2025-11-23 00:23:16.549026 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-11-23 00:23:16.549033 | orchestrator |
2025-11-23 00:23:16.549039 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-23 00:23:16.549046 | orchestrator | Sunday 23 November 2025 00:23:09 +0000 (0:00:03.646) 0:00:03.956 *******
2025-11-23 00:23:16.549065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-11-23 00:23:16.549073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-11-23 00:23:16.549080 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-11-23 00:23:16.549087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-11-23 00:23:16.549093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-11-23 00:23:16.549100 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-11-23 00:23:16.549106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-11-23 00:23:16.549113 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-11-23 00:23:16.549119 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-11-23 00:23:16.549126 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-11-23 00:23:16.549132 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-11-23 00:23:16.549139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-11-23 00:23:16.549146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-23 00:23:16.549153 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-11-23 00:23:16.549159 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-11-23 00:23:16.549166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-11-23 00:23:16.549172 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-11-23 00:23:16.549180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-23 00:23:16.549187 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-11-23 00:23:16.549195 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-11-23 00:23:16.549202 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-11-23 00:23:16.549210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-23 00:23:16.549217 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-11-23 00:23:16.549225 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-11-23 00:23:16.549232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-23 00:23:16.549240 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:23:16.549247 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-11-23 00:23:16.549255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-23 00:23:16.549262 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-11-23 00:23:16.549276 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-11-23 00:23:16.549283 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:23:16.549291 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-11-23 00:23:16.549298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-23 00:23:16.549306 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-11-23 00:23:16.549313 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:23:16.549321 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-11-23 00:23:16.549328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-11-23 00:23:16.549336 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-11-23 00:23:16.549343 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:23:16.549350 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-11-23 00:23:16.549358 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-11-23 00:23:16.549366 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-11-23 00:23:16.549373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-11-23 00:23:16.549381 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-11-23 00:23:16.549388 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-11-23 00:23:16.549396 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-11-23 00:23:16.549404 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-11-23 00:23:16.549425 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-11-23 00:23:16.549432 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-11-23 00:23:16.549439 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:23:16.549445 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-11-23 00:23:16.549452 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:23:16.549458 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-11-23 00:23:16.549465 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-11-23 00:23:16.549472 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-11-23 00:23:16.549478 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:23:16.549508 | orchestrator |
2025-11-23 00:23:16.549515 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-11-23 00:23:16.549522 | orchestrator |
2025-11-23 00:23:16.549529 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-11-23 00:23:16.549535 | orchestrator | Sunday 23 November 2025 00:23:09 +0000 (0:00:00.406) 0:00:04.363 *******
2025-11-23 00:23:16.549542 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:16.549548 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:16.549555 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:16.549571 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:16.549578 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:16.549585 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:16.549599 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:16.549606 | orchestrator |
2025-11-23 00:23:16.549613 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-11-23 00:23:16.549620 | orchestrator | Sunday 23 November 2025 00:23:11 +0000 (0:00:01.115) 0:00:05.478 *******
2025-11-23 00:23:16.549626 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:16.549633 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:16.549639 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:16.549646 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:16.549652 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:16.549659 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:16.549665 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:16.549672 | orchestrator |
2025-11-23 00:23:16.549678 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-11-23 00:23:16.549685 | orchestrator | Sunday 23 November 2025 00:23:12 +0000 (0:00:00.241) 0:00:06.640 *******
2025-11-23 00:23:16.549698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:23:16.549707 | orchestrator |
2025-11-23 00:23:16.549714 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-11-23 00:23:16.549720 | orchestrator | Sunday 23 November 2025 00:23:12 +0000 (0:00:00.241) 0:00:06.882 *******
2025-11-23 00:23:16.549727 | orchestrator | changed: [testbed-manager]
2025-11-23 00:23:16.549734 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:23:16.549740 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:23:16.549747 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:23:16.549753 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:23:16.549760 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:23:16.549766 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:23:16.549773 | orchestrator |
2025-11-23 00:23:16.549779 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-11-23 00:23:16.549792 | orchestrator | Sunday 23 November 2025 00:23:14 +0000 (0:00:01.748) 0:00:08.630 *******
2025-11-23 00:23:16.549799 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:23:16.549807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:23:16.549815 | orchestrator |
2025-11-23 00:23:16.549822 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-11-23 00:23:16.549829 | orchestrator | Sunday 23 November 2025 00:23:14 +0000 (0:00:00.213) 0:00:08.843 *******
2025-11-23 00:23:16.549835 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:23:16.549842 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:23:16.549848 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:23:16.549855 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:23:16.549861 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:23:16.549867 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:23:16.549874 | orchestrator |
2025-11-23 00:23:16.549880 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-11-23 00:23:16.549887 | orchestrator | Sunday 23 November 2025 00:23:15 +0000 (0:00:00.894) 0:00:09.738 *******
2025-11-23 00:23:16.549893 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:23:16.549900 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:23:16.549906 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:23:16.549913 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:23:16.549919 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:23:16.549926 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:23:16.549932 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:23:16.549939 | orchestrator |
2025-11-23 00:23:16.549946 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-11-23 00:23:16.549952 | orchestrator | Sunday 23 November 2025 00:23:15 +0000 (0:00:00.573) 0:00:10.312 *******
2025-11-23 00:23:16.549959 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:23:16.549965 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:23:16.549972 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:23:16.549978 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:23:16.549985 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:23:16.549991 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:23:16.549998 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:16.550004 | orchestrator |
2025-11-23 00:23:16.550011 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-11-23 00:23:16.550067 | orchestrator | Sunday 23 November 2025 00:23:16 +0000 (0:00:00.525) 0:00:10.837 *******
2025-11-23 00:23:16.550075 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:23:16.550081 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:23:16.550094 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:23:26.906430 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:23:26.906601 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:23:26.906623 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:23:26.906636 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:23:26.906647 | orchestrator |
2025-11-23 00:23:26.906660 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-11-23 00:23:26.906673 | orchestrator | Sunday 23 November 2025 00:23:16 +0000 (0:00:00.167) 0:00:11.005 *******
2025-11-23 00:23:26.906685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:23:26.906715 | orchestrator |
2025-11-23 00:23:26.906726 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-11-23 00:23:26.906738 | orchestrator | Sunday 23 November 2025 00:23:16 +0000 (0:00:00.223) 0:00:11.229 *******
2025-11-23 00:23:26.906749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:23:26.906760 | orchestrator |
2025-11-23 00:23:26.906773 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-11-23 00:23:26.906792 | orchestrator | Sunday 23 November 2025 00:23:17 +0000 (0:00:00.244) 0:00:11.473 *******
2025-11-23 00:23:26.906817 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.906838 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.906851 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:26.906862 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.906873 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:26.906884 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:26.906895 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.906906 | orchestrator |
2025-11-23 00:23:26.906916 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-11-23 00:23:26.906927 | orchestrator | Sunday 23 November 2025 00:23:18 +0000 (0:00:01.225) 0:00:12.698 *******
2025-11-23 00:23:26.906941 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:23:26.906953 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:23:26.906966 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:23:26.906978 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:23:26.906990 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:23:26.907002 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:23:26.907014 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:23:26.907026 | orchestrator |
2025-11-23 00:23:26.907038 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-11-23 00:23:26.907051 | orchestrator | Sunday 23 November 2025 00:23:18 +0000 (0:00:00.159) 0:00:12.858 *******
2025-11-23 00:23:26.907063 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:26.907075 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:26.907088 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:26.907100 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.907113 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.907125 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.907137 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.907149 | orchestrator |
2025-11-23 00:23:26.907162 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-11-23 00:23:26.907174 | orchestrator | Sunday 23 November 2025 00:23:18 +0000 (0:00:00.443) 0:00:13.302 *******
2025-11-23 00:23:26.907187 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:23:26.907199 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:23:26.907214 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:23:26.907231 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:23:26.907242 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:23:26.907258 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:23:26.907277 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:23:26.907311 | orchestrator |
2025-11-23 00:23:26.907328 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-11-23 00:23:26.907347 | orchestrator | Sunday 23 November 2025 00:23:19 +0000 (0:00:00.202) 0:00:13.504 *******
2025-11-23 00:23:26.907359 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:23:26.907370 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:23:26.907381 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:23:26.907391 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.907402 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:23:26.907413 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:23:26.907424 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:23:26.907434 | orchestrator |
2025-11-23 00:23:26.907445 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-11-23 00:23:26.907456 | orchestrator | Sunday 23 November 2025 00:23:19 +0000 (0:00:00.514) 0:00:14.018 *******
2025-11-23 00:23:26.907467 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.907478 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:23:26.907517 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:23:26.907529 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:23:26.907539 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:23:26.907550 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:23:26.907561 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:23:26.907572 | orchestrator |
2025-11-23 00:23:26.907589 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-11-23 00:23:26.907602 | orchestrator | Sunday 23 November 2025 00:23:20 +0000 (0:00:01.002) 0:00:15.021 *******
2025-11-23 00:23:26.907614 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.907625 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:26.907636 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:26.907647 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.907657 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:26.907668 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.907679 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.907690 | orchestrator |
2025-11-23 00:23:26.907701 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-11-23 00:23:26.907712 | orchestrator | Sunday 23 November 2025 00:23:21 +0000 (0:00:00.977) 0:00:15.998 *******
2025-11-23 00:23:26.907744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:23:26.907755 | orchestrator |
2025-11-23 00:23:26.907766 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-11-23 00:23:26.907778 | orchestrator | Sunday 23 November 2025 00:23:21 +0000 (0:00:00.290) 0:00:16.288 *******
2025-11-23 00:23:26.907788 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:23:26.907799 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:23:26.907810 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:23:26.907821 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:23:26.907831 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:23:26.907842 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:23:26.907853 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:23:26.907863 | orchestrator |
2025-11-23 00:23:26.907874 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-11-23 00:23:26.907885 | orchestrator | Sunday 23 November 2025 00:23:23 +0000 (0:00:01.149) 0:00:17.438 *******
2025-11-23 00:23:26.907896 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:26.907906 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:26.907917 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:26.907927 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.907938 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.907949 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.907959 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.907979 | orchestrator |
2025-11-23 00:23:26.907990 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-11-23 00:23:26.908007 | orchestrator | Sunday 23 November 2025 00:23:23 +0000 (0:00:00.176) 0:00:17.614 *******
2025-11-23 00:23:26.908018 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:26.908029 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:26.908039 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:26.908050 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.908060 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.908071 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.908082 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.908092 | orchestrator |
2025-11-23 00:23:26.908103 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-11-23 00:23:26.908114 | orchestrator | Sunday 23 November 2025 00:23:23 +0000 (0:00:00.200) 0:00:17.807 *******
2025-11-23 00:23:26.908125 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:26.908135 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:26.908146 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:26.908156 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.908167 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.908178 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.908189 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.908199 | orchestrator |
2025-11-23 00:23:26.908210 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-11-23 00:23:26.908221 | orchestrator | Sunday 23 November 2025 00:23:23 +0000 (0:00:00.200) 0:00:18.007 *******
2025-11-23 00:23:26.908233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:23:26.908246 | orchestrator |
2025-11-23 00:23:26.908257 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-11-23 00:23:26.908267 | orchestrator | Sunday 23 November 2025 00:23:23 +0000 (0:00:00.233) 0:00:18.240 *******
2025-11-23 00:23:26.908278 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:26.908289 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:26.908299 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:26.908310 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.908321 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.908331 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.908342 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.908353 | orchestrator |
2025-11-23 00:23:26.908364 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-11-23 00:23:26.908374 | orchestrator | Sunday 23 November 2025 00:23:24 +0000 (0:00:00.443) 0:00:18.684 *******
2025-11-23 00:23:26.908386 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:23:26.908396 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:23:26.908407 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:23:26.908418 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:23:26.908428 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:23:26.908439 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:23:26.908450 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:23:26.908460 | orchestrator |
2025-11-23 00:23:26.908471 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-11-23 00:23:26.908520 | orchestrator | Sunday 23 November 2025 00:23:24 +0000 (0:00:00.225) 0:00:18.910 *******
2025-11-23 00:23:26.908541 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.908560 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:23:26.908578 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:23:26.908594 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.908605 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.908616 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:23:26.908627 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.908638 | orchestrator |
2025-11-23 00:23:26.908649 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-11-23 00:23:26.908668 | orchestrator | Sunday 23 November 2025 00:23:25 +0000 (0:00:00.909) 0:00:19.819 *******
2025-11-23 00:23:26.908679 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:23:26.908690 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:23:26.908701 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:23:26.908712 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.908723 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.908734 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.908745 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:23:26.908755 | orchestrator |
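The resolvconf tasks in this play boil down to pointing /etc/resolv.conf at the systemd-resolved stub file and then (re)starting the service. A minimal sketch of that idiom follows; `ROOT` is an assumption added here so the commands can be exercised against a scratch directory instead of a real host (on a target node the role does the equivalent as root against `/`):

```shell
# Scratch-root sketch of the "Link /run/systemd/resolve/stub-resolv.conf
# to /etc/resolv.conf" step seen above; ROOT is a testing assumption.
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/run/systemd/resolve" "$ROOT/etc"
: > "$ROOT/run/systemd/resolve/stub-resolv.conf"   # stand-in for the resolved stub
ln -sfn /run/systemd/resolve/stub-resolv.conf "$ROOT/etc/resolv.conf"
readlink "$ROOT/etc/resolv.conf"
# On a real node the role then starts/enables systemd-resolved, matching
# the "Start/enable systemd-resolved service" task in the log.
```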
2025-11-23 00:23:26.908767 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-11-23 00:23:26.908778 | orchestrator | Sunday 23 November 2025 00:23:25 +0000 (0:00:00.511) 0:00:20.330 *******
2025-11-23 00:23:26.908789 | orchestrator | ok: [testbed-manager]
2025-11-23 00:23:26.908800 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:23:26.908811 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:23:26.908821 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:23:26.908841 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:24:07.120666 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:24:07.120827 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:24:07.120857 | orchestrator |
2025-11-23 00:24:07.120880 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-11-23 00:24:07.120900 | orchestrator | Sunday 23 November 2025 00:23:26 +0000 (0:00:00.987) 0:00:21.318 *******
2025-11-23 00:24:07.120919 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:24:07.120940 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:24:07.120959 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:24:07.120978 | orchestrator | changed: [testbed-manager]
2025-11-23 00:24:07.120998 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:24:07.121016 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:24:07.121033 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:24:07.121052 | orchestrator |
2025-11-23 00:24:07.121070 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-11-23 00:24:07.121089 | orchestrator | Sunday 23 November 2025 00:23:47 +0000 (0:00:20.513) 0:00:41.832 *******
2025-11-23 00:24:07.121213 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:24:07.121234 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:24:07.121253 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:24:07.121273 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:24:07.121292 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:24:07.121312 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:24:07.121334 | orchestrator | ok: [testbed-manager]
2025-11-23 00:24:07.121356 | orchestrator |
2025-11-23 00:24:07.121376 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-11-23 00:24:07.121397 | orchestrator | Sunday 23 November 2025 00:23:47 +0000 (0:00:00.160) 0:00:41.992 *******
2025-11-23 00:24:07.121418 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:24:07.121437 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:24:07.121457 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:24:07.121504 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:24:07.121597 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:24:07.121617 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:24:07.121636 | orchestrator | ok: [testbed-manager]
2025-11-23 00:24:07.121654 | orchestrator |
2025-11-23 00:24:07.121670 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-11-23 00:24:07.121687 | orchestrator | Sunday 23 November 2025 00:23:47 +0000 (0:00:00.189) 0:00:42.182 *******
2025-11-23 00:24:07.121698 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:24:07.121707 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:24:07.121717 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:24:07.121747 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:24:07.121758 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:24:07.121767 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:24:07.121777 | orchestrator | ok: [testbed-manager]
2025-11-23 00:24:07.121787 | orchestrator |
2025-11-23 00:24:07.121796 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-11-23 00:24:07.121832 | orchestrator | Sunday 23 November 2025 00:23:47 +0000 (0:00:00.179) 0:00:42.361 *******
2025-11-23 00:24:07.121844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:24:07.121857 | orchestrator |
2025-11-23 00:24:07.121867 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-11-23 00:24:07.121876 | orchestrator | Sunday 23 November 2025 00:23:48 +0000 (0:00:00.219) 0:00:42.581 *******
2025-11-23 00:24:07.121886 | orchestrator | ok: [testbed-manager]
2025-11-23 00:24:07.121896 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:24:07.121912 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:24:07.121928 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:24:07.121943 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:24:07.121960 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:24:07.121975 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:24:07.121989 | orchestrator |
2025-11-23 00:24:07.122005 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-11-23 00:24:07.122099 | orchestrator | Sunday 23 November 2025 00:23:49 +0000 (0:00:01.221) 0:00:43.802 *******
2025-11-23 00:24:07.122118 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:24:07.122134 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:24:07.122194 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:24:07.122209 | orchestrator | changed: [testbed-manager]
2025-11-23 00:24:07.122224 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:24:07.122237 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:24:07.122252 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:24:07.122266 | orchestrator |
2025-11-23 00:24:07.122280 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-11-23 00:24:07.122294 | orchestrator | Sunday 23 November 2025 00:23:50 +0000 (0:00:00.908) 0:00:44.711 *******
2025-11-23 00:24:07.122307 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:24:07.122321 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:24:07.122334 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:24:07.122347 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:24:07.122361 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:24:07.122376 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:24:07.122391 | orchestrator | ok: [testbed-manager]
2025-11-23 00:24:07.122405 | orchestrator |
2025-11-23 00:24:07.122418 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-11-23 00:24:07.122433 | orchestrator | Sunday 23 November 2025 00:23:50 +0000 (0:00:00.685) 0:00:45.396 *******
2025-11-23 00:24:07.122448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:24:07.122464 | orchestrator |
2025-11-23 00:24:07.122504 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-11-23 00:24:07.122519 | orchestrator | Sunday 23 November 2025 00:23:51 +0000 (0:00:00.231) 0:00:45.628 *******
2025-11-23 00:24:07.122533 | orchestrator | changed: [testbed-manager]
2025-11-23 00:24:07.122546 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:24:07.122559 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:24:07.122572 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:24:07.122585 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:24:07.122598 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:24:07.122611 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:24:07.122624 | orchestrator |
2025-11-23 00:24:07.122662 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-11-23 00:24:07.122677 | orchestrator | Sunday 23 November 2025 00:23:52 +0000 (0:00:00.844) 0:00:46.473 *******
2025-11-23 00:24:07.122691 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:24:07.122704 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:24:07.122734 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:24:07.122748 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:24:07.122761 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:24:07.122774 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:24:07.122786 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:24:07.122798 | orchestrator |
2025-11-23 00:24:07.122811 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2025-11-23 00:24:07.122825 | orchestrator | Sunday 23 November 2025 00:23:52 +0000 (0:00:00.182) 0:00:46.655 *******
2025-11-23 00:24:07.122839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:24:07.122854 | orchestrator |
2025-11-23 00:24:07.122867 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2025-11-23 00:24:07.122880 | orchestrator | Sunday 23 November 2025 00:23:52 +0000 (0:00:00.242) 0:00:46.897 *******
2025-11-23 00:24:07.122893 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:24:07.122907 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:24:07.122920 | orchestrator | ok: [testbed-manager]
2025-11-23 00:24:07.122933 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:24:07.122959 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:24:07.122973 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:24:07.122987 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:24:07.123001 |
orchestrator | 2025-11-23 00:24:07.123015 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2025-11-23 00:24:07.123028 | orchestrator | Sunday 23 November 2025 00:23:53 +0000 (0:00:01.201) 0:00:48.099 ******* 2025-11-23 00:24:07.123042 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:24:07.123056 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:24:07.123069 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:24:07.123082 | orchestrator | changed: [testbed-manager] 2025-11-23 00:24:07.123095 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:24:07.123109 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:24:07.123122 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:24:07.123135 | orchestrator | 2025-11-23 00:24:07.123148 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-11-23 00:24:07.123162 | orchestrator | Sunday 23 November 2025 00:23:54 +0000 (0:00:00.937) 0:00:49.037 ******* 2025-11-23 00:24:07.123176 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:24:07.123189 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:24:07.123203 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:24:07.123217 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:24:07.123230 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:24:07.123243 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:24:07.123257 | orchestrator | changed: [testbed-manager] 2025-11-23 00:24:07.123270 | orchestrator | 2025-11-23 00:24:07.123283 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-11-23 00:24:07.123297 | orchestrator | Sunday 23 November 2025 00:24:04 +0000 (0:00:10.160) 0:00:59.197 ******* 2025-11-23 00:24:07.123310 | orchestrator | ok: [testbed-manager] 2025-11-23 00:24:07.123324 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:24:07.123337 | orchestrator | ok: 
[testbed-node-4] 2025-11-23 00:24:07.123350 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:24:07.123363 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:24:07.123376 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:24:07.123389 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:24:07.123402 | orchestrator | 2025-11-23 00:24:07.123416 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-11-23 00:24:07.123430 | orchestrator | Sunday 23 November 2025 00:24:05 +0000 (0:00:00.860) 0:01:00.058 ******* 2025-11-23 00:24:07.123444 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:24:07.123457 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:24:07.123529 | orchestrator | ok: [testbed-manager] 2025-11-23 00:24:07.123544 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:24:07.123569 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:24:07.123584 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:24:07.123597 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:24:07.123610 | orchestrator | 2025-11-23 00:24:07.123624 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-11-23 00:24:07.123636 | orchestrator | Sunday 23 November 2025 00:24:06 +0000 (0:00:00.884) 0:01:00.942 ******* 2025-11-23 00:24:07.123649 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:24:07.123662 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:24:07.123675 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:24:07.123687 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:24:07.123701 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:24:07.123714 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:24:07.123727 | orchestrator | ok: [testbed-manager] 2025-11-23 00:24:07.123740 | orchestrator | 2025-11-23 00:24:07.123754 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-11-23 00:24:07.123769 | orchestrator | Sunday 23 
November 2025 00:24:06 +0000 (0:00:00.174) 0:01:01.116 ******* 2025-11-23 00:24:07.123782 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:24:07.123795 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:24:07.123808 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:24:07.123822 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:24:07.123836 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:24:07.123850 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:24:07.123863 | orchestrator | ok: [testbed-manager] 2025-11-23 00:24:07.123877 | orchestrator | 2025-11-23 00:24:07.123890 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-11-23 00:24:07.123903 | orchestrator | Sunday 23 November 2025 00:24:06 +0000 (0:00:00.177) 0:01:01.293 ******* 2025-11-23 00:24:07.123918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:24:07.123934 | orchestrator | 2025-11-23 00:24:07.123961 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-11-23 00:26:21.730815 | orchestrator | Sunday 23 November 2025 00:24:07 +0000 (0:00:00.244) 0:01:01.538 ******* 2025-11-23 00:26:21.730936 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:21.730953 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:21.730964 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:21.730975 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:21.730985 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:21.730996 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:21.731007 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:21.731017 | orchestrator | 2025-11-23 00:26:21.731029 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2025-11-23 00:26:21.731040 | orchestrator | Sunday 23 November 2025 00:24:08 +0000 (0:00:01.527) 0:01:03.066 ******* 2025-11-23 00:26:21.731051 | orchestrator | changed: [testbed-manager] 2025-11-23 00:26:21.731062 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:26:21.731073 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:26:21.731083 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:26:21.731095 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:26:21.731105 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:26:21.731116 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:26:21.731126 | orchestrator | 2025-11-23 00:26:21.731137 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-11-23 00:26:21.731149 | orchestrator | Sunday 23 November 2025 00:24:09 +0000 (0:00:00.532) 0:01:03.598 ******* 2025-11-23 00:26:21.731159 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:21.731170 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:21.731181 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:21.731191 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:21.731201 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:21.731212 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:21.731264 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:21.731277 | orchestrator | 2025-11-23 00:26:21.731287 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-11-23 00:26:21.731298 | orchestrator | Sunday 23 November 2025 00:24:09 +0000 (0:00:00.178) 0:01:03.777 ******* 2025-11-23 00:26:21.731309 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:21.731320 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:21.731330 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:21.731343 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:21.731355 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:21.731366 | 
orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:21.731378 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:21.731390 | orchestrator | 2025-11-23 00:26:21.731403 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-11-23 00:26:21.731415 | orchestrator | Sunday 23 November 2025 00:24:10 +0000 (0:00:01.073) 0:01:04.851 ******* 2025-11-23 00:26:21.731428 | orchestrator | changed: [testbed-manager] 2025-11-23 00:26:21.731495 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:26:21.731507 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:26:21.731519 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:26:21.731532 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:26:21.731543 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:26:21.731556 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:26:21.731568 | orchestrator | 2025-11-23 00:26:21.731581 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-11-23 00:26:21.731593 | orchestrator | Sunday 23 November 2025 00:24:11 +0000 (0:00:01.516) 0:01:06.367 ******* 2025-11-23 00:26:21.731605 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:21.731617 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:21.731630 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:21.731641 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:21.731653 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:21.731665 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:21.731678 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:21.731690 | orchestrator | 2025-11-23 00:26:21.731702 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-11-23 00:26:21.731713 | orchestrator | Sunday 23 November 2025 00:24:14 +0000 (0:00:02.353) 0:01:08.721 ******* 2025-11-23 00:26:21.731723 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:21.731734 
| orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:21.731744 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:21.731754 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:21.731765 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:21.731775 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:21.731786 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:21.731796 | orchestrator | 2025-11-23 00:26:21.731807 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-11-23 00:26:21.731817 | orchestrator | Sunday 23 November 2025 00:24:47 +0000 (0:00:32.718) 0:01:41.440 ******* 2025-11-23 00:26:21.731828 | orchestrator | changed: [testbed-manager] 2025-11-23 00:26:21.731838 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:26:21.731849 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:26:21.731859 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:26:21.731869 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:26:21.731880 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:26:21.731890 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:26:21.731900 | orchestrator | 2025-11-23 00:26:21.731911 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-11-23 00:26:21.731921 | orchestrator | Sunday 23 November 2025 00:26:07 +0000 (0:01:20.617) 0:03:02.057 ******* 2025-11-23 00:26:21.731931 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:21.731942 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:21.731952 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:21.731962 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:21.731973 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:21.731991 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:21.732002 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:21.732012 | orchestrator | 2025-11-23 00:26:21.732023 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2025-11-23 00:26:21.732033 | orchestrator | Sunday 23 November 2025 00:26:09 +0000 (0:00:01.761) 0:03:03.818 ******* 2025-11-23 00:26:21.732044 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:21.732054 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:21.732065 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:21.732075 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:21.732085 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:21.732095 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:21.732106 | orchestrator | changed: [testbed-manager] 2025-11-23 00:26:21.732116 | orchestrator | 2025-11-23 00:26:21.732127 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-11-23 00:26:21.732137 | orchestrator | Sunday 23 November 2025 00:26:19 +0000 (0:00:10.218) 0:03:14.037 ******* 2025-11-23 00:26:21.732176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-11-23 00:26:21.732201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-11-23 00:26:21.732216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-11-23 00:26:21.732229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-11-23 00:26:21.732240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-11-23 00:26:21.732258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-11-23 00:26:21.732270 | orchestrator | 2025-11-23 00:26:21.732281 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-11-23 00:26:21.732291 | orchestrator | Sunday 23 November 2025 00:26:19 +0000 (0:00:00.356) 0:03:14.393 ******* 2025-11-23 00:26:21.732302 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-23 00:26:21.732313 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-23 00:26:21.732330 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:26:21.732341 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-23 00:26:21.732351 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:26:21.732362 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:26:21.732372 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-23 00:26:21.732383 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:26:21.732393 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-23 00:26:21.732408 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-23 00:26:21.732419 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-23 00:26:21.732445 | orchestrator | 2025-11-23 00:26:21.732457 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-11-23 00:26:21.732467 | orchestrator | Sunday 23 November 2025 00:26:21 +0000 (0:00:01.597) 0:03:15.991 ******* 2025-11-23 00:26:21.732478 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-23 00:26:21.732490 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-23 00:26:21.732500 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-23 00:26:21.732511 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-23 00:26:21.732522 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-23 00:26:21.732540 | orchestrator 
| skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-23 00:26:26.208885 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-23 00:26:26.208998 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-23 00:26:26.209009 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-23 00:26:26.209019 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-23 00:26:26.209028 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-23 00:26:26.209036 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-23 00:26:26.209043 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-23 00:26:26.209050 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-23 00:26:26.209058 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-23 00:26:26.209065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-23 00:26:26.209091 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-23 00:26:26.209099 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-23 00:26:26.209106 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-23 00:26:26.209113 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-23 00:26:26.209120 | orchestrator | skipping: 
[testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-23 00:26:26.209127 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-23 00:26:26.209135 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-23 00:26:26.209163 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-23 00:26:26.209170 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-23 00:26:26.209177 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-23 00:26:26.209184 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-23 00:26:26.209193 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:26:26.209201 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-23 00:26:26.209208 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:26:26.209215 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-23 00:26:26.209222 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-23 00:26:26.209230 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:26:26.209237 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-23 00:26:26.209244 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-23 00:26:26.209251 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-23 00:26:26.209258 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 
'value': 16777216})  2025-11-23 00:26:26.209265 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-23 00:26:26.209272 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-23 00:26:26.209279 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-23 00:26:26.209286 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-23 00:26:26.209293 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-23 00:26:26.209300 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-23 00:26:26.209307 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:26:26.209314 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-23 00:26:26.209321 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-23 00:26:26.209328 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-23 00:26:26.209335 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-23 00:26:26.209342 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-23 00:26:26.209363 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-23 00:26:26.209371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-23 00:26:26.209378 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-23 00:26:26.209385 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-23 00:26:26.209393 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-23 00:26:26.209400 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-23 00:26:26.209408 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-23 00:26:26.209416 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-23 00:26:26.209455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-23 00:26:26.209473 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-23 00:26:26.209482 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-23 00:26:26.209495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-23 00:26:26.209504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-23 00:26:26.209513 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-23 00:26:26.209521 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-23 00:26:26.209528 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-23 00:26:26.209536 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-23 00:26:26.209545 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-23 00:26:26.209552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 
'value': 4096}) 2025-11-23 00:26:26.209561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-23 00:26:26.209569 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-23 00:26:26.209577 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-23 00:26:26.209586 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-23 00:26:26.209594 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-23 00:26:26.209602 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-23 00:26:26.209610 | orchestrator | 2025-11-23 00:26:26.209620 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-11-23 00:26:26.209629 | orchestrator | Sunday 23 November 2025 00:26:25 +0000 (0:00:03.620) 0:03:19.611 ******* 2025-11-23 00:26:26.209637 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-23 00:26:26.209645 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-23 00:26:26.209654 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-23 00:26:26.209661 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-23 00:26:26.209670 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-23 00:26:26.209678 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-23 00:26:26.209686 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-23 00:26:26.209694 | orchestrator | 2025-11-23 00:26:26.209702 | 
orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-11-23 00:26:26.209711 | orchestrator | Sunday 23 November 2025 00:26:25 +0000 (0:00:00.487) 0:03:20.099 ******* 2025-11-23 00:26:26.209719 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-23 00:26:26.209727 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-23 00:26:26.209736 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:26:26.209744 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-23 00:26:26.209752 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:26:26.209761 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:26:26.209774 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-23 00:26:26.209783 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:26:26.209791 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-23 00:26:26.209799 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-23 00:26:26.209817 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-23 00:26:38.365099 | orchestrator | 2025-11-23 00:26:38.365229 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-11-23 00:26:38.365264 | orchestrator | Sunday 23 November 2025 00:26:26 +0000 (0:00:00.528) 0:03:20.627 ******* 2025-11-23 00:26:38.365286 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-23 00:26:38.365308 | orchestrator | skipping: [testbed-node-3] 
2025-11-23 00:26:38.365329 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-23 00:26:38.365341 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-23 00:26:38.365352 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:26:38.365363 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:26:38.365374 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-23 00:26:38.365385 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:26:38.365397 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-23 00:26:38.365408 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-23 00:26:38.365419 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-23 00:26:38.365502 | orchestrator | 2025-11-23 00:26:38.365515 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-11-23 00:26:38.365526 | orchestrator | Sunday 23 November 2025 00:26:26 +0000 (0:00:00.447) 0:03:21.075 ******* 2025-11-23 00:26:38.365537 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-23 00:26:38.365550 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:26:38.365569 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-23 00:26:38.365587 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-23 00:26:38.365605 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:26:38.365625 | orchestrator | skipping: [testbed-node-2] 2025-11-23 
00:26:38.365646 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-23 00:26:38.365666 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:26:38.365684 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-23 00:26:38.365703 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-23 00:26:38.365722 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-23 00:26:38.365740 | orchestrator | 2025-11-23 00:26:38.365759 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-11-23 00:26:38.365776 | orchestrator | Sunday 23 November 2025 00:26:27 +0000 (0:00:00.581) 0:03:21.657 ******* 2025-11-23 00:26:38.365796 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:26:38.365814 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:26:38.365832 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:26:38.365851 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:26:38.365905 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:26:38.365925 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:26:38.365944 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:26:38.365963 | orchestrator | 2025-11-23 00:26:38.365982 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-11-23 00:26:38.366001 | orchestrator | Sunday 23 November 2025 00:26:27 +0000 (0:00:00.258) 0:03:21.915 ******* 2025-11-23 00:26:38.366096 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:38.366118 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:38.366137 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:38.366155 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:38.366176 | orchestrator | ok: 
[testbed-node-1] 2025-11-23 00:26:38.366188 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:38.366198 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:38.366209 | orchestrator | 2025-11-23 00:26:38.366220 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-11-23 00:26:38.366236 | orchestrator | Sunday 23 November 2025 00:26:33 +0000 (0:00:05.842) 0:03:27.757 ******* 2025-11-23 00:26:38.366254 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-11-23 00:26:38.366274 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-11-23 00:26:38.366293 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:26:38.366311 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:26:38.366330 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-11-23 00:26:38.366348 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:26:38.366365 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-11-23 00:26:38.366383 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-11-23 00:26:38.366401 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:26:38.366420 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-11-23 00:26:38.366462 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:26:38.366473 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:26:38.366484 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-11-23 00:26:38.366494 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:26:38.366504 | orchestrator | 2025-11-23 00:26:38.366515 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-11-23 00:26:38.366526 | orchestrator | Sunday 23 November 2025 00:26:33 +0000 (0:00:00.261) 0:03:28.019 ******* 2025-11-23 00:26:38.366537 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-11-23 00:26:38.366548 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-11-23 
00:26:38.366558 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-11-23 00:26:38.366591 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-11-23 00:26:38.366603 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-11-23 00:26:38.366613 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-11-23 00:26:38.366624 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-11-23 00:26:38.366634 | orchestrator | 2025-11-23 00:26:38.366645 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-11-23 00:26:38.366656 | orchestrator | Sunday 23 November 2025 00:26:34 +0000 (0:00:00.890) 0:03:28.910 ******* 2025-11-23 00:26:38.366688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:26:38.366702 | orchestrator | 2025-11-23 00:26:38.366713 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-11-23 00:26:38.366747 | orchestrator | Sunday 23 November 2025 00:26:34 +0000 (0:00:00.337) 0:03:29.247 ******* 2025-11-23 00:26:38.366758 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:38.366769 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:38.366779 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:38.366790 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:38.366800 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:38.366811 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:38.366832 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:38.366843 | orchestrator | 2025-11-23 00:26:38.366859 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-11-23 00:26:38.366870 | orchestrator | Sunday 23 November 2025 00:26:36 +0000 (0:00:01.183) 0:03:30.431 ******* 2025-11-23 
00:26:38.366881 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:38.366891 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:38.366902 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:38.366912 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:38.366922 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:38.366933 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:38.366947 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:38.366974 | orchestrator | 2025-11-23 00:26:38.366994 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-11-23 00:26:38.367012 | orchestrator | Sunday 23 November 2025 00:26:36 +0000 (0:00:00.481) 0:03:30.912 ******* 2025-11-23 00:26:38.367029 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:26:38.367046 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:26:38.367063 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:26:38.367080 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:26:38.367098 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:26:38.367116 | orchestrator | changed: [testbed-manager] 2025-11-23 00:26:38.367133 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:26:38.367151 | orchestrator | 2025-11-23 00:26:38.367170 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-11-23 00:26:38.367187 | orchestrator | Sunday 23 November 2025 00:26:36 +0000 (0:00:00.499) 0:03:31.411 ******* 2025-11-23 00:26:38.367198 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:38.367209 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:38.367220 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:38.367230 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:38.367241 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:38.367252 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:38.367262 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:38.367273 | 
orchestrator | 2025-11-23 00:26:38.367284 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-11-23 00:26:38.367295 | orchestrator | Sunday 23 November 2025 00:26:37 +0000 (0:00:00.502) 0:03:31.914 ******* 2025-11-23 00:26:38.367310 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1763856254.191282, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:38.367325 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1763856266.021161, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:38.367337 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1763856274.596842, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2025-11-23 00:26:38.367373 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1763856274.7260273, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498177 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1763856275.7592838, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498259 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1763856274.967185, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498268 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1763856264.830527, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498276 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498282 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498289 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498315 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498334 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498346 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498352 | 
orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:26:42.498359 | orchestrator | 2025-11-23 00:26:42.498367 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-11-23 00:26:42.498375 | orchestrator | Sunday 23 November 2025 00:26:38 +0000 (0:00:00.865) 0:03:32.780 ******* 2025-11-23 00:26:42.498381 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:26:42.498388 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:26:42.498394 | orchestrator | changed: [testbed-manager] 2025-11-23 00:26:42.498400 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:26:42.498406 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:26:42.498412 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:26:42.498418 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:26:42.498496 | orchestrator | 2025-11-23 00:26:42.498507 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-11-23 00:26:42.498513 | orchestrator | Sunday 23 November 2025 00:26:39 +0000 (0:00:00.954) 0:03:33.735 ******* 2025-11-23 00:26:42.498519 | orchestrator | changed: [testbed-manager] 2025-11-23 00:26:42.498525 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:26:42.498531 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:26:42.498537 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:26:42.498543 | orchestrator | changed: [testbed-node-3] 2025-11-23 
00:26:42.498549 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:26:42.498555 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:26:42.498561 | orchestrator | 2025-11-23 00:26:42.498567 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-11-23 00:26:42.498574 | orchestrator | Sunday 23 November 2025 00:26:40 +0000 (0:00:00.967) 0:03:34.702 ******* 2025-11-23 00:26:42.498586 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:26:42.498592 | orchestrator | changed: [testbed-manager] 2025-11-23 00:26:42.498598 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:26:42.498604 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:26:42.498610 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:26:42.498616 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:26:42.498622 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:26:42.498628 | orchestrator | 2025-11-23 00:26:42.498635 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-11-23 00:26:42.498641 | orchestrator | Sunday 23 November 2025 00:26:41 +0000 (0:00:01.006) 0:03:35.709 ******* 2025-11-23 00:26:42.498647 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:26:42.498653 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:26:42.498659 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:26:42.498665 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:26:42.498671 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:26:42.498677 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:26:42.498683 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:26:42.498689 | orchestrator | 2025-11-23 00:26:42.498695 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-11-23 00:26:42.498701 | orchestrator | Sunday 23 November 2025 00:26:41 +0000 (0:00:00.200) 0:03:35.910 ******* 2025-11-23 
00:26:42.498707 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:26:42.498714 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:26:42.498720 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:26:42.498728 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:26:42.498738 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:26:42.498749 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:26:42.498760 | orchestrator | ok: [testbed-manager] 2025-11-23 00:26:42.498771 | orchestrator | 2025-11-23 00:26:42.498781 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-11-23 00:26:42.498791 | orchestrator | Sunday 23 November 2025 00:26:42 +0000 (0:00:00.634) 0:03:36.544 ******* 2025-11-23 00:26:42.498803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:26:42.498816 | orchestrator | 2025-11-23 00:26:42.498826 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-11-23 00:26:42.498846 | orchestrator | Sunday 23 November 2025 00:26:42 +0000 (0:00:00.372) 0:03:36.916 ******* 2025-11-23 00:27:59.507644 | orchestrator | ok: [testbed-manager] 2025-11-23 00:27:59.507767 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:27:59.507782 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:27:59.507790 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:27:59.507797 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:27:59.507804 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:27:59.507812 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:27:59.507819 | orchestrator | 2025-11-23 00:27:59.507828 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-11-23 00:27:59.507836 | orchestrator | 
Sunday 23 November 2025 00:26:50 +0000 (0:00:08.359) 0:03:45.275 ******* 2025-11-23 00:27:59.507844 | orchestrator | ok: [testbed-manager] 2025-11-23 00:27:59.507851 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:27:59.507859 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:27:59.507866 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:27:59.507874 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:27:59.507881 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:27:59.507888 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:27:59.507895 | orchestrator | 2025-11-23 00:27:59.507903 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-11-23 00:27:59.507910 | orchestrator | Sunday 23 November 2025 00:26:52 +0000 (0:00:01.251) 0:03:46.527 ******* 2025-11-23 00:27:59.507918 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:27:59.507948 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:27:59.507955 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:27:59.507963 | orchestrator | ok: [testbed-manager] 2025-11-23 00:27:59.507970 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:27:59.507977 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:27:59.507984 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:27:59.507991 | orchestrator | 2025-11-23 00:27:59.507998 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-11-23 00:27:59.508006 | orchestrator | Sunday 23 November 2025 00:26:53 +0000 (0:00:00.945) 0:03:47.472 ******* 2025-11-23 00:27:59.508013 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:27:59.508020 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:27:59.508027 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:27:59.508034 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:27:59.508041 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:27:59.508048 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:27:59.508055 | orchestrator | ok: 
[testbed-manager] 2025-11-23 00:27:59.508062 | orchestrator | 2025-11-23 00:27:59.508070 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-11-23 00:27:59.508078 | orchestrator | Sunday 23 November 2025 00:26:53 +0000 (0:00:00.237) 0:03:47.710 ******* 2025-11-23 00:27:59.508085 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:27:59.508092 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:27:59.508099 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:27:59.508106 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:27:59.508113 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:27:59.508120 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:27:59.508127 | orchestrator | ok: [testbed-manager] 2025-11-23 00:27:59.508134 | orchestrator | 2025-11-23 00:27:59.508142 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-11-23 00:27:59.508149 | orchestrator | Sunday 23 November 2025 00:26:53 +0000 (0:00:00.255) 0:03:47.966 ******* 2025-11-23 00:27:59.508156 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:27:59.508163 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:27:59.508171 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:27:59.508179 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:27:59.508187 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:27:59.508195 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:27:59.508203 | orchestrator | ok: [testbed-manager] 2025-11-23 00:27:59.508210 | orchestrator | 2025-11-23 00:27:59.508219 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-11-23 00:27:59.508227 | orchestrator | Sunday 23 November 2025 00:26:53 +0000 (0:00:00.248) 0:03:48.215 ******* 2025-11-23 00:27:59.508235 | orchestrator | ok: [testbed-manager] 2025-11-23 00:27:59.508243 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:27:59.508251 | orchestrator | ok: 
[testbed-node-5] 2025-11-23 00:27:59.508259 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:27:59.508267 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:27:59.508275 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:27:59.508282 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:27:59.508291 | orchestrator | 2025-11-23 00:27:59.508299 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-11-23 00:27:59.508307 | orchestrator | Sunday 23 November 2025 00:26:59 +0000 (0:00:05.667) 0:03:53.882 ******* 2025-11-23 00:27:59.508316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:27:59.508326 | orchestrator | 2025-11-23 00:27:59.508335 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-11-23 00:27:59.508343 | orchestrator | Sunday 23 November 2025 00:26:59 +0000 (0:00:00.323) 0:03:54.205 ******* 2025-11-23 00:27:59.508352 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-11-23 00:27:59.508360 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-11-23 00:27:59.508374 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-11-23 00:27:59.508382 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:27:59.508390 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-11-23 00:27:59.508398 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-11-23 00:27:59.508435 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:27:59.508443 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-11-23 00:27:59.508451 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:27:59.508459 | orchestrator | skipping: [testbed-node-3] => 
(item=apt-daily-upgrade)  2025-11-23 00:27:59.508467 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-11-23 00:27:59.508475 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-11-23 00:27:59.508483 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-11-23 00:27:59.508492 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:27:59.508499 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:27:59.508508 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-11-23 00:27:59.508531 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-11-23 00:27:59.508540 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:27:59.508563 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-11-23 00:27:59.508570 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-11-23 00:27:59.508578 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:27:59.508585 | orchestrator | 2025-11-23 00:27:59.508592 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-11-23 00:27:59.508599 | orchestrator | Sunday 23 November 2025 00:27:00 +0000 (0:00:00.294) 0:03:54.500 ******* 2025-11-23 00:27:59.508610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:27:59.508618 | orchestrator | 2025-11-23 00:27:59.508625 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-11-23 00:27:59.508633 | orchestrator | Sunday 23 November 2025 00:27:00 +0000 (0:00:00.339) 0:03:54.840 ******* 2025-11-23 00:27:59.508640 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-11-23 00:27:59.508647 | orchestrator | skipping: 
[testbed-node-0]
2025-11-23 00:27:59.508654 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-11-23 00:27:59.508661 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-11-23 00:27:59.508668 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:27:59.508676 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:27:59.508683 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-11-23 00:27:59.508690 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-11-23 00:27:59.508697 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:27:59.508704 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-11-23 00:27:59.508711 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:27:59.508718 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:27:59.508725 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-11-23 00:27:59.508732 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:27:59.508739 | orchestrator |
2025-11-23 00:27:59.508746 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-11-23 00:27:59.508753 | orchestrator | Sunday 23 November 2025 00:27:00 +0000 (0:00:00.279) 0:03:55.119 *******
2025-11-23 00:27:59.508765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:27:59.508777 | orchestrator |
2025-11-23 00:27:59.508789 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-11-23 00:27:59.508808 | orchestrator | Sunday 23 November 2025 00:27:01 +0000 (0:00:00.334) 0:03:55.454 *******
2025-11-23 00:27:59.508820 | orchestrator | changed: [testbed-manager]
2025-11-23 00:27:59.508831 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:27:59.508844 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:27:59.508856 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:27:59.508869 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:27:59.508882 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:27:59.508895 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:27:59.508908 | orchestrator |
2025-11-23 00:27:59.508916 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-11-23 00:27:59.508923 | orchestrator | Sunday 23 November 2025 00:27:35 +0000 (0:00:34.376) 0:04:29.831 *******
2025-11-23 00:27:59.508930 | orchestrator | changed: [testbed-manager]
2025-11-23 00:27:59.508937 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:27:59.508944 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:27:59.508951 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:27:59.508958 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:27:59.508965 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:27:59.508972 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:27:59.508980 | orchestrator |
2025-11-23 00:27:59.508987 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-11-23 00:27:59.508994 | orchestrator | Sunday 23 November 2025 00:27:43 +0000 (0:00:08.387) 0:04:38.218 *******
2025-11-23 00:27:59.509001 | orchestrator | changed: [testbed-manager]
2025-11-23 00:27:59.509008 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:27:59.509015 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:27:59.509022 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:27:59.509029 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:27:59.509036 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:27:59.509044 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:27:59.509051 | orchestrator |
2025-11-23 00:27:59.509058 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-11-23 00:27:59.509065 | orchestrator | Sunday 23 November 2025 00:27:51 +0000 (0:00:08.188) 0:04:46.407 *******
2025-11-23 00:27:59.509072 | orchestrator | ok: [testbed-manager]
2025-11-23 00:27:59.509079 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:27:59.509086 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:27:59.509093 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:27:59.509101 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:27:59.509108 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:27:59.509115 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:27:59.509122 | orchestrator |
2025-11-23 00:27:59.509129 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-11-23 00:27:59.509136 | orchestrator | Sunday 23 November 2025 00:27:53 +0000 (0:00:01.722) 0:04:48.130 *******
2025-11-23 00:27:59.509144 | orchestrator | changed: [testbed-manager]
2025-11-23 00:27:59.509151 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:27:59.509158 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:27:59.509165 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:27:59.509172 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:27:59.509179 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:27:59.509186 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:27:59.509193 | orchestrator |
2025-11-23 00:27:59.509206 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-11-23 00:28:08.882145 | orchestrator | Sunday 23 November 2025 00:27:59 +0000 (0:00:05.786) 0:04:53.917 *******
2025-11-23 00:28:08.882275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:28:08.882294 | orchestrator |
2025-11-23 00:28:08.882325 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-11-23 00:28:08.882358 | orchestrator | Sunday 23 November 2025 00:27:59 +0000 (0:00:00.353) 0:04:54.271 *******
2025-11-23 00:28:08.882369 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:28:08.882381 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:28:08.882392 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:28:08.882457 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:28:08.882470 | orchestrator | changed: [testbed-manager]
2025-11-23 00:28:08.882481 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:28:08.882492 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:28:08.882503 | orchestrator |
2025-11-23 00:28:08.882513 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-11-23 00:28:08.882524 | orchestrator | Sunday 23 November 2025 00:28:00 +0000 (0:00:00.600) 0:04:54.872 *******
2025-11-23 00:28:08.882535 | orchestrator | ok: [testbed-manager]
2025-11-23 00:28:08.882548 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:28:08.882559 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:28:08.882570 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:28:08.882580 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:28:08.882591 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:28:08.882602 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:28:08.882612 | orchestrator |
2025-11-23 00:28:08.882625 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-11-23 00:28:08.882638 | orchestrator | Sunday 23 November 2025 00:28:01 +0000 (0:00:01.538) 0:04:56.411 *******
2025-11-23 00:28:08.882651 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:28:08.882663 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:28:08.882675 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:28:08.882688 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:28:08.882700 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:28:08.882713 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:28:08.882725 | orchestrator | changed: [testbed-manager]
2025-11-23 00:28:08.882737 | orchestrator |
2025-11-23 00:28:08.882750 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-11-23 00:28:08.882762 | orchestrator | Sunday 23 November 2025 00:28:02 +0000 (0:00:00.707) 0:04:57.118 *******
2025-11-23 00:28:08.882775 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:28:08.882787 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:28:08.882799 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:28:08.882813 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:28:08.882825 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:28:08.882837 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:28:08.882849 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:28:08.882862 | orchestrator |
2025-11-23 00:28:08.882874 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-11-23 00:28:08.882887 | orchestrator | Sunday 23 November 2025 00:28:02 +0000 (0:00:00.222) 0:04:57.340 *******
2025-11-23 00:28:08.882899 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:28:08.882912 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:28:08.882924 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:28:08.882936 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:28:08.882948 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:28:08.882961 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:28:08.882973 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:28:08.882984 | orchestrator |
2025-11-23 00:28:08.882994 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-11-23 00:28:08.883005 | orchestrator | Sunday 23 November 2025 00:28:03 +0000 (0:00:00.300) 0:04:57.641 *******
2025-11-23 00:28:08.883016 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:28:08.883027 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:28:08.883037 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:28:08.883048 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:28:08.883059 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:28:08.883069 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:28:08.883080 | orchestrator | ok: [testbed-manager]
2025-11-23 00:28:08.883098 | orchestrator |
2025-11-23 00:28:08.883109 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-11-23 00:28:08.883120 | orchestrator | Sunday 23 November 2025 00:28:03 +0000 (0:00:00.247) 0:04:57.888 *******
2025-11-23 00:28:08.883131 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:28:08.883142 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:28:08.883152 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:28:08.883163 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:28:08.883174 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:28:08.883185 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:28:08.883195 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:28:08.883206 | orchestrator |
2025-11-23 00:28:08.883217 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-11-23 00:28:08.883228 | orchestrator | Sunday 23 November 2025 00:28:03 +0000 (0:00:00.201) 0:04:58.090 *******
2025-11-23 00:28:08.883239 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:28:08.883250 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:28:08.883260 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:28:08.883271 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:28:08.883281 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:28:08.883292 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:28:08.883303 | orchestrator | ok: [testbed-manager]
2025-11-23 00:28:08.883313 | orchestrator |
2025-11-23 00:28:08.883324 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-11-23 00:28:08.883335 | orchestrator | Sunday 23 November 2025 00:28:03 +0000 (0:00:00.234) 0:04:58.325 *******
2025-11-23 00:28:08.883346 | orchestrator | ok: [testbed-node-0] =>
2025-11-23 00:28:08.883357 | orchestrator |  docker_version: 5:27.5.1
2025-11-23 00:28:08.883367 | orchestrator | ok: [testbed-node-1] =>
2025-11-23 00:28:08.883378 | orchestrator |  docker_version: 5:27.5.1
2025-11-23 00:28:08.883388 | orchestrator | ok: [testbed-node-2] =>
2025-11-23 00:28:08.883461 | orchestrator |  docker_version: 5:27.5.1
2025-11-23 00:28:08.883483 | orchestrator | ok: [testbed-node-3] =>
2025-11-23 00:28:08.883500 | orchestrator |  docker_version: 5:27.5.1
2025-11-23 00:28:08.883530 | orchestrator | ok: [testbed-node-4] =>
2025-11-23 00:28:08.883542 | orchestrator |  docker_version: 5:27.5.1
2025-11-23 00:28:08.883552 | orchestrator | ok: [testbed-node-5] =>
2025-11-23 00:28:08.883563 | orchestrator |  docker_version: 5:27.5.1
2025-11-23 00:28:08.883574 | orchestrator | ok: [testbed-manager] =>
2025-11-23 00:28:08.883584 | orchestrator |  docker_version: 5:27.5.1
2025-11-23 00:28:08.883595 | orchestrator |
2025-11-23 00:28:08.883606 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-11-23 00:28:08.883617 | orchestrator | Sunday 23 November 2025 00:28:04 +0000 (0:00:00.220) 0:04:58.545 *******
2025-11-23 00:28:08.883634 | orchestrator | ok: [testbed-node-0] =>
2025-11-23 00:28:08.883646 | orchestrator |  docker_cli_version: 5:27.5.1
2025-11-23 00:28:08.883657 | orchestrator | ok: [testbed-node-1] =>
2025-11-23 00:28:08.883667 | orchestrator |  docker_cli_version: 5:27.5.1
2025-11-23 00:28:08.883678 | orchestrator | ok: [testbed-node-2] =>
2025-11-23 00:28:08.883688 | orchestrator |  docker_cli_version: 5:27.5.1
2025-11-23 00:28:08.883698 | orchestrator | ok: [testbed-node-3] =>
2025-11-23 00:28:08.883709 | orchestrator |  docker_cli_version: 5:27.5.1
2025-11-23 00:28:08.883720 | orchestrator | ok: [testbed-node-4] =>
2025-11-23 00:28:08.883730 | orchestrator |  docker_cli_version: 5:27.5.1
2025-11-23 00:28:08.883740 | orchestrator | ok: [testbed-node-5] =>
2025-11-23 00:28:08.883751 | orchestrator |  docker_cli_version: 5:27.5.1
2025-11-23 00:28:08.883762 | orchestrator | ok: [testbed-manager] =>
2025-11-23 00:28:08.883772 | orchestrator |  docker_cli_version: 5:27.5.1
2025-11-23 00:28:08.883783 | orchestrator |
2025-11-23 00:28:08.883793 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-11-23 00:28:08.883804 | orchestrator | Sunday 23 November 2025 00:28:04 +0000 (0:00:00.265) 0:04:58.811 *******
2025-11-23 00:28:08.883815 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:28:08.883834 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:28:08.883845 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:28:08.883855 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:28:08.883866 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:28:08.883877 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:28:08.883888 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:28:08.883899 | orchestrator |
2025-11-23 00:28:08.883910 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-11-23 00:28:08.883920 | orchestrator | Sunday 23 November 2025 00:28:04 +0000 (0:00:00.274) 0:04:59.085 *******
2025-11-23 00:28:08.883931 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:28:08.883942 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:28:08.883953 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:28:08.883963 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:28:08.883974 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:28:08.883985 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:28:08.883996 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:28:08.884006 | orchestrator |
2025-11-23 00:28:08.884017 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-11-23 00:28:08.884028 | orchestrator | Sunday 23 November 2025 00:28:04 +0000 (0:00:00.213) 0:04:59.298 *******
2025-11-23 00:28:08.884063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:28:08.884077 | orchestrator |
2025-11-23 00:28:08.884088 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-11-23 00:28:08.884099 | orchestrator | Sunday 23 November 2025 00:28:05 +0000 (0:00:00.342) 0:04:59.641 *******
2025-11-23 00:28:08.884110 | orchestrator | ok: [testbed-manager]
2025-11-23 00:28:08.884121 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:28:08.884131 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:28:08.884142 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:28:08.884153 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:28:08.884164 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:28:08.884174 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:28:08.884185 | orchestrator |
2025-11-23 00:28:08.884196 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-11-23 00:28:08.884207 | orchestrator | Sunday 23 November 2025 00:28:05 +0000 (0:00:00.741) 0:05:00.382 *******
2025-11-23 00:28:08.884218 | orchestrator | ok: [testbed-manager]
2025-11-23 00:28:08.884229 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:28:08.884239 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:28:08.884250 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:28:08.884260 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:28:08.884271 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:28:08.884282 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:28:08.884292 | orchestrator |
2025-11-23 00:28:08.884303 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-11-23 00:28:08.884315 | orchestrator | Sunday 23 November 2025 00:28:08 +0000 (0:00:02.597) 0:05:02.979 *******
2025-11-23 00:28:08.884326 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-11-23 00:28:08.884337 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-11-23 00:28:08.884348 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-11-23 00:28:08.884359 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:28:08.884370 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-11-23 00:28:08.884380 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-11-23 00:28:08.884391 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-11-23 00:28:08.884424 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:28:08.884436 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-11-23 00:28:08.884447 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-11-23 00:28:08.884468 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-11-23 00:28:08.884479 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:28:08.884490 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-11-23 00:28:08.884501 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-11-23 00:28:08.884512 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-11-23 00:28:08.884523 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:28:08.884533 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-11-23 00:28:08.884551 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-11-23 00:29:06.803969 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-11-23 00:29:06.804065 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-11-23 00:29:06.804075 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-11-23 00:29:06.804083 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-11-23 00:29:06.804090 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:29:06.804098 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:29:06.804118 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-11-23 00:29:06.804125 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-11-23 00:29:06.804132 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-11-23 00:29:06.804138 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:29:06.804145 | orchestrator |
2025-11-23 00:29:06.804153 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-11-23 00:29:06.804161 | orchestrator | Sunday 23 November 2025 00:28:09 +0000 (0:00:00.711) 0:05:03.690 *******
2025-11-23 00:29:06.804168 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.804175 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804181 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804188 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804195 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804201 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804208 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804215 | orchestrator |
2025-11-23 00:29:06.804221 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-11-23 00:29:06.804228 | orchestrator | Sunday 23 November 2025 00:28:15 +0000 (0:00:06.633) 0:05:10.324 *******
2025-11-23 00:29:06.804235 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804241 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804248 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804254 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804261 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804267 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.804274 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804281 | orchestrator |
2025-11-23 00:29:06.804287 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-11-23 00:29:06.804294 | orchestrator | Sunday 23 November 2025 00:28:16 +0000 (0:00:00.943) 0:05:11.268 *******
2025-11-23 00:29:06.804301 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.804307 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804314 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804320 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804327 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804333 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804340 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804346 | orchestrator |
2025-11-23 00:29:06.804353 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-11-23 00:29:06.804360 | orchestrator | Sunday 23 November 2025 00:28:24 +0000 (0:00:08.104) 0:05:19.372 *******
2025-11-23 00:29:06.804366 | orchestrator | changed: [testbed-manager]
2025-11-23 00:29:06.804373 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804420 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804457 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804465 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804472 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804478 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804485 | orchestrator |
2025-11-23 00:29:06.804492 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-11-23 00:29:06.804498 | orchestrator | Sunday 23 November 2025 00:28:28 +0000 (0:00:03.184) 0:05:22.556 *******
2025-11-23 00:29:06.804505 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804513 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804521 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804528 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804536 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804543 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.804551 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804558 | orchestrator |
2025-11-23 00:29:06.804566 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-11-23 00:29:06.804574 | orchestrator | Sunday 23 November 2025 00:28:29 +0000 (0:00:01.286) 0:05:23.842 *******
2025-11-23 00:29:06.804582 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804589 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804597 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804604 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804612 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804620 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.804627 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804635 | orchestrator |
2025-11-23 00:29:06.804643 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-11-23 00:29:06.804650 | orchestrator | Sunday 23 November 2025 00:28:30 +0000 (0:00:01.182) 0:05:25.025 *******
2025-11-23 00:29:06.804658 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:29:06.804666 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:29:06.804673 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:29:06.804681 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:29:06.804688 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:29:06.804696 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:29:06.804703 | orchestrator | changed: [testbed-manager]
2025-11-23 00:29:06.804711 | orchestrator |
2025-11-23 00:29:06.804718 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-11-23 00:29:06.804726 | orchestrator | Sunday 23 November 2025 00:28:31 +0000 (0:00:00.888) 0:05:25.913 *******
2025-11-23 00:29:06.804733 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.804741 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804748 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804755 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804763 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804770 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804778 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804786 | orchestrator |
2025-11-23 00:29:06.804793 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-11-23 00:29:06.804801 | orchestrator | Sunday 23 November 2025 00:28:40 +0000 (0:00:08.992) 0:05:34.906 *******
2025-11-23 00:29:06.804822 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804830 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804838 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804846 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804853 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804861 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804869 | orchestrator | changed: [testbed-manager]
2025-11-23 00:29:06.804875 | orchestrator |
2025-11-23 00:29:06.804882 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-11-23 00:29:06.804889 | orchestrator | Sunday 23 November 2025 00:28:41 +0000 (0:00:00.780) 0:05:35.687 *******
2025-11-23 00:29:06.804901 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.804908 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804914 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.804921 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804928 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804934 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.804941 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.804947 | orchestrator |
2025-11-23 00:29:06.804954 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-11-23 00:29:06.804961 | orchestrator | Sunday 23 November 2025 00:28:50 +0000 (0:00:09.090) 0:05:44.777 *******
2025-11-23 00:29:06.804967 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.804974 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.804980 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.804987 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.804994 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.805000 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.805007 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.805013 | orchestrator |
2025-11-23 00:29:06.805020 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-11-23 00:29:06.805026 | orchestrator | Sunday 23 November 2025 00:29:00 +0000 (0:00:10.470) 0:05:55.248 *******
2025-11-23 00:29:06.805033 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-11-23 00:29:06.805040 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-11-23 00:29:06.805047 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-11-23 00:29:06.805053 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-11-23 00:29:06.805060 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-11-23 00:29:06.805066 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-11-23 00:29:06.805073 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-11-23 00:29:06.805080 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-11-23 00:29:06.805086 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-11-23 00:29:06.805093 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-11-23 00:29:06.805099 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-11-23 00:29:06.805106 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-11-23 00:29:06.805113 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-11-23 00:29:06.805119 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-11-23 00:29:06.805126 | orchestrator |
2025-11-23 00:29:06.805133 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-11-23 00:29:06.805170 | orchestrator | Sunday 23 November 2025 00:29:01 +0000 (0:00:01.016) 0:05:56.265 *******
2025-11-23 00:29:06.805177 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:29:06.805184 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:29:06.805191 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:29:06.805197 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:29:06.805204 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:29:06.805211 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:29:06.805217 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:29:06.805224 | orchestrator |
2025-11-23 00:29:06.805230 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-11-23 00:29:06.805237 | orchestrator | Sunday 23 November 2025 00:29:02 +0000 (0:00:00.439) 0:05:56.704 *******
2025-11-23 00:29:06.805244 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:06.805250 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:29:06.805257 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:29:06.805263 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:06.805270 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:06.805276 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:06.805283 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:29:06.805289 | orchestrator |
2025-11-23 00:29:06.805301 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-11-23 00:29:06.805308 | orchestrator | Sunday 23 November 2025 00:29:06 +0000 (0:00:03.753) 0:06:00.458 *******
2025-11-23 00:29:06.805315 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:29:06.805321 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:29:06.805328 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:29:06.805335 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:29:06.805341 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:29:06.805347 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:29:06.805354 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:29:06.805360 | orchestrator |
2025-11-23 00:29:06.805368 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-11-23 00:29:06.805375 | orchestrator | Sunday 23 November 2025 00:29:06 +0000 (0:00:00.505) 0:06:00.963 *******
2025-11-23 00:29:06.805404 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-11-23 00:29:06.805412 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-11-23 00:29:06.805418 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:29:06.805425 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-11-23 00:29:06.805431 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-11-23 00:29:06.805438 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-11-23 00:29:06.805445 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-11-23 00:29:06.805451 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:29:06.805458 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-11-23 00:29:06.805469 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-11-23 00:29:23.684853 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:29:23.684977 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-11-23 00:29:23.685002 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-11-23 00:29:23.685021 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:29:23.685041 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-11-23 00:29:23.685060 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-11-23 00:29:23.685101 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:29:23.685121 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:29:23.685142 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-11-23 00:29:23.685161 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-11-23 00:29:23.685180 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:29:23.685193 | orchestrator |
2025-11-23 00:29:23.685205 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-11-23 00:29:23.685217 | orchestrator | Sunday 23 November 2025 00:29:07 +0000 (0:00:00.473) 0:06:01.437 *******
2025-11-23 00:29:23.685228 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:29:23.685239 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:29:23.685249 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:29:23.685260 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:29:23.685271 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:29:23.685281 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:29:23.685292 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:29:23.685303 | orchestrator |
2025-11-23 00:29:23.685314 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-11-23 00:29:23.685325 | orchestrator | Sunday 23 November 2025 00:29:07 +0000 (0:00:00.426) 0:06:01.864 *******
2025-11-23 00:29:23.685336 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:29:23.685347 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:29:23.685358 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:29:23.685368 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:29:23.685416 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:29:23.685455 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:29:23.685468 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:29:23.685480 | orchestrator |
2025-11-23 00:29:23.685492 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-11-23 00:29:23.685504 | orchestrator | Sunday 23 November 2025 00:29:07 +0000 (0:00:00.564) 0:06:02.271 *******
2025-11-23 00:29:23.685517 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:29:23.685529 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:29:23.685541 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:29:23.685553 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:29:23.685565 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:29:23.685576 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:29:23.685588 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:29:23.685601 | orchestrator |
2025-11-23 00:29:23.685614 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-11-23 00:29:23.685626 | orchestrator | Sunday 23 November 2025 00:29:08 +0000 (0:00:00.564) 0:06:02.835 *******
2025-11-23 00:29:23.685639 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:29:23.685650 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:29:23.685661 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:29:23.685671 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:29:23.685682 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:29:23.685693 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:29:23.685703 | orchestrator | ok: [testbed-manager]
2025-11-23 00:29:23.685714 | orchestrator |
2025-11-23 00:29:23.685725 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-11-23 00:29:23.685736 | orchestrator | Sunday 23 November 2025 00:29:10 +0000 (0:00:02.117) 0:06:04.953 *******
2025-11-23 00:29:23.685747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:29:23.685760 | orchestrator |
2025-11-23 00:29:23.685772 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-11-23 00:29:23.685782 | orchestrator | Sunday 23 November 2025 00:29:11 +0000 (0:00:00.742) 0:06:05.695 *******
2025-11-23 00:29:23.685793 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:29:23.685804 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:29:23.685815 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:29:23.685825 |
orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:23.685836 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:23.685846 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:23.685857 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:23.685868 | orchestrator | 2025-11-23 00:29:23.685878 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-11-23 00:29:23.685889 | orchestrator | Sunday 23 November 2025 00:29:11 +0000 (0:00:00.672) 0:06:06.368 ******* 2025-11-23 00:29:23.685900 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:23.685910 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:23.685921 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:29:23.685932 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:23.685942 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:23.685953 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:23.685963 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:23.685975 | orchestrator | 2025-11-23 00:29:23.685985 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-11-23 00:29:23.685996 | orchestrator | Sunday 23 November 2025 00:29:12 +0000 (0:00:00.828) 0:06:07.196 ******* 2025-11-23 00:29:23.686007 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:23.686083 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:23.686097 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:29:23.686108 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:23.686119 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:23.686130 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:23.686150 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:23.686161 | orchestrator | 2025-11-23 00:29:23.686172 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-11-23 00:29:23.686202 | 
orchestrator | Sunday 23 November 2025 00:29:13 +0000 (0:00:01.149) 0:06:08.346 ******* 2025-11-23 00:29:23.686214 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:29:23.686225 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:23.686236 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:23.686247 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:23.686258 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:23.686269 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:29:23.686280 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:23.686290 | orchestrator | 2025-11-23 00:29:23.686301 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-11-23 00:29:23.686313 | orchestrator | Sunday 23 November 2025 00:29:15 +0000 (0:00:01.217) 0:06:09.563 ******* 2025-11-23 00:29:23.686324 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:23.686334 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:23.686345 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:29:23.686356 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:23.686366 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:23.686402 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:23.686414 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:23.686424 | orchestrator | 2025-11-23 00:29:23.686435 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-11-23 00:29:23.686446 | orchestrator | Sunday 23 November 2025 00:29:16 +0000 (0:00:01.153) 0:06:10.717 ******* 2025-11-23 00:29:23.686457 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:23.686467 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:23.686478 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:29:23.686488 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:23.686499 | orchestrator | changed: [testbed-manager] 2025-11-23 00:29:23.686509 | 
orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:23.686520 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:23.686530 | orchestrator | 2025-11-23 00:29:23.686541 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-11-23 00:29:23.686551 | orchestrator | Sunday 23 November 2025 00:29:17 +0000 (0:00:01.150) 0:06:11.868 ******* 2025-11-23 00:29:23.686562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:29:23.686573 | orchestrator | 2025-11-23 00:29:23.686584 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-11-23 00:29:23.686595 | orchestrator | Sunday 23 November 2025 00:29:18 +0000 (0:00:00.821) 0:06:12.690 ******* 2025-11-23 00:29:23.686606 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:23.686616 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:23.686627 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:23.686637 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:23.686648 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:29:23.686658 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:23.686669 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:23.686679 | orchestrator | 2025-11-23 00:29:23.686690 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-11-23 00:29:23.686701 | orchestrator | Sunday 23 November 2025 00:29:19 +0000 (0:00:01.238) 0:06:13.928 ******* 2025-11-23 00:29:23.686712 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:23.686722 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:23.686733 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:23.686743 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:23.686753 | orchestrator | ok: 
[testbed-node-4] 2025-11-23 00:29:23.686764 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:23.686775 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:23.686785 | orchestrator | 2025-11-23 00:29:23.686796 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-11-23 00:29:23.686819 | orchestrator | Sunday 23 November 2025 00:29:20 +0000 (0:00:01.036) 0:06:14.965 ******* 2025-11-23 00:29:23.686830 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:23.686841 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:23.686852 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:23.686862 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:23.686872 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:29:23.686883 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:23.686894 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:23.686904 | orchestrator | 2025-11-23 00:29:23.686915 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-11-23 00:29:23.686926 | orchestrator | Sunday 23 November 2025 00:29:21 +0000 (0:00:01.091) 0:06:16.056 ******* 2025-11-23 00:29:23.686936 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:23.686947 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:23.686957 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:23.686968 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:23.686978 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:29:23.686989 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:23.686999 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:23.687010 | orchestrator | 2025-11-23 00:29:23.687021 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-11-23 00:29:23.687031 | orchestrator | Sunday 23 November 2025 00:29:22 +0000 (0:00:01.024) 0:06:17.081 ******* 2025-11-23 00:29:23.687042 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:29:23.687053 | orchestrator | 2025-11-23 00:29:23.687064 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-23 00:29:23.687074 | orchestrator | Sunday 23 November 2025 00:29:23 +0000 (0:00:00.753) 0:06:17.834 ******* 2025-11-23 00:29:23.687085 | orchestrator | 2025-11-23 00:29:23.687096 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-23 00:29:23.687106 | orchestrator | Sunday 23 November 2025 00:29:23 +0000 (0:00:00.035) 0:06:17.869 ******* 2025-11-23 00:29:23.687117 | orchestrator | 2025-11-23 00:29:23.687128 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-23 00:29:23.687138 | orchestrator | Sunday 23 November 2025 00:29:23 +0000 (0:00:00.038) 0:06:17.908 ******* 2025-11-23 00:29:23.687149 | orchestrator | 2025-11-23 00:29:23.687160 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-23 00:29:23.687170 | orchestrator | Sunday 23 November 2025 00:29:23 +0000 (0:00:00.034) 0:06:17.943 ******* 2025-11-23 00:29:23.687181 | orchestrator | 2025-11-23 00:29:23.687199 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-23 00:29:46.586225 | orchestrator | Sunday 23 November 2025 00:29:23 +0000 (0:00:00.034) 0:06:17.977 ******* 2025-11-23 00:29:46.586341 | orchestrator | 2025-11-23 00:29:46.586358 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-23 00:29:46.586430 | orchestrator | Sunday 23 November 2025 00:29:23 +0000 (0:00:00.038) 0:06:18.016 ******* 2025-11-23 00:29:46.586442 | orchestrator | 2025-11-23 
00:29:46.586470 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-23 00:29:46.586482 | orchestrator | Sunday 23 November 2025 00:29:23 +0000 (0:00:00.041) 0:06:18.058 ******* 2025-11-23 00:29:46.586493 | orchestrator | 2025-11-23 00:29:46.586504 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-23 00:29:46.586515 | orchestrator | Sunday 23 November 2025 00:29:23 +0000 (0:00:00.035) 0:06:18.094 ******* 2025-11-23 00:29:46.586525 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:46.586538 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:46.586548 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:46.586559 | orchestrator | 2025-11-23 00:29:46.586570 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-11-23 00:29:46.586605 | orchestrator | Sunday 23 November 2025 00:29:24 +0000 (0:00:01.143) 0:06:19.237 ******* 2025-11-23 00:29:46.586616 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:46.586628 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:46.586639 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:46.586649 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:29:46.586660 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:46.586670 | orchestrator | changed: [testbed-manager] 2025-11-23 00:29:46.586681 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:46.586691 | orchestrator | 2025-11-23 00:29:46.586702 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2025-11-23 00:29:46.586713 | orchestrator | Sunday 23 November 2025 00:29:26 +0000 (0:00:01.359) 0:06:20.597 ******* 2025-11-23 00:29:46.586723 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:46.586734 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:46.586745 | orchestrator | changed: [testbed-node-2] 2025-11-23 
00:29:46.586756 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:46.586768 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:46.586780 | orchestrator | changed: [testbed-manager] 2025-11-23 00:29:46.586792 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:46.586804 | orchestrator | 2025-11-23 00:29:46.586817 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-11-23 00:29:46.586830 | orchestrator | Sunday 23 November 2025 00:29:27 +0000 (0:00:01.159) 0:06:21.757 ******* 2025-11-23 00:29:46.586842 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:29:46.586854 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:46.586865 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:29:46.586878 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:46.586890 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:46.586903 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:46.586915 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:46.586926 | orchestrator | 2025-11-23 00:29:46.586939 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-11-23 00:29:46.586951 | orchestrator | Sunday 23 November 2025 00:29:29 +0000 (0:00:02.123) 0:06:23.880 ******* 2025-11-23 00:29:46.586963 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:29:46.586975 | orchestrator | 2025-11-23 00:29:46.586987 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-11-23 00:29:46.586999 | orchestrator | Sunday 23 November 2025 00:29:29 +0000 (0:00:00.072) 0:06:23.953 ******* 2025-11-23 00:29:46.587012 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:46.587024 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:46.587036 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:46.587048 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:29:46.587060 | 
orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:46.587073 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:46.587085 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:46.587098 | orchestrator | 2025-11-23 00:29:46.587110 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-11-23 00:29:46.587121 | orchestrator | Sunday 23 November 2025 00:29:30 +0000 (0:00:00.817) 0:06:24.770 ******* 2025-11-23 00:29:46.587132 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:29:46.587142 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:29:46.587153 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:29:46.587163 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:29:46.587174 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:29:46.587184 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:29:46.587195 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:29:46.587206 | orchestrator | 2025-11-23 00:29:46.587217 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-11-23 00:29:46.587228 | orchestrator | Sunday 23 November 2025 00:29:30 +0000 (0:00:00.558) 0:06:25.329 ******* 2025-11-23 00:29:46.587239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:29:46.587262 | orchestrator | 2025-11-23 00:29:46.587273 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-11-23 00:29:46.587284 | orchestrator | Sunday 23 November 2025 00:29:31 +0000 (0:00:00.776) 0:06:26.105 ******* 2025-11-23 00:29:46.587294 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:46.587305 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:46.587315 | orchestrator | ok: [testbed-node-2] 
2025-11-23 00:29:46.587326 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:46.587336 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:29:46.587347 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:46.587357 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:46.587388 | orchestrator | 2025-11-23 00:29:46.587400 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-11-23 00:29:46.587411 | orchestrator | Sunday 23 November 2025 00:29:32 +0000 (0:00:00.731) 0:06:26.837 ******* 2025-11-23 00:29:46.587422 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-11-23 00:29:46.587433 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-11-23 00:29:46.587462 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-11-23 00:29:46.587473 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-11-23 00:29:46.587484 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-11-23 00:29:46.587494 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-11-23 00:29:46.587511 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-11-23 00:29:46.587522 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-11-23 00:29:46.587533 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-11-23 00:29:46.587543 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-11-23 00:29:46.587554 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-11-23 00:29:46.587564 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-11-23 00:29:46.587575 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-11-23 00:29:46.587585 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-11-23 00:29:46.587596 | orchestrator | 2025-11-23 00:29:46.587607 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2025-11-23 00:29:46.587617 | orchestrator | Sunday 23 November 2025 00:29:34 +0000 (0:00:02.322) 0:06:29.159 ******* 2025-11-23 00:29:46.587628 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:29:46.587639 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:29:46.587649 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:29:46.587660 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:29:46.587670 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:29:46.587681 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:29:46.587691 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:29:46.587702 | orchestrator | 2025-11-23 00:29:46.587713 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-11-23 00:29:46.587723 | orchestrator | Sunday 23 November 2025 00:29:35 +0000 (0:00:00.418) 0:06:29.578 ******* 2025-11-23 00:29:46.587735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-23 00:29:46.587748 | orchestrator | 2025-11-23 00:29:46.587759 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-11-23 00:29:46.587769 | orchestrator | Sunday 23 November 2025 00:29:35 +0000 (0:00:00.669) 0:06:30.248 ******* 2025-11-23 00:29:46.587780 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:46.587791 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:46.587801 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:46.587819 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:46.587830 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:29:46.587841 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:46.587851 | orchestrator | ok: 
[testbed-manager] 2025-11-23 00:29:46.587862 | orchestrator | 2025-11-23 00:29:46.587873 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-11-23 00:29:46.587883 | orchestrator | Sunday 23 November 2025 00:29:36 +0000 (0:00:00.802) 0:06:31.050 ******* 2025-11-23 00:29:46.587894 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:46.587904 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:46.587915 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:46.587925 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:46.587936 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:29:46.587946 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:46.587957 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:46.587967 | orchestrator | 2025-11-23 00:29:46.587978 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-11-23 00:29:46.587989 | orchestrator | Sunday 23 November 2025 00:29:37 +0000 (0:00:00.667) 0:06:31.717 ******* 2025-11-23 00:29:46.588000 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:29:46.588010 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:29:46.588021 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:29:46.588031 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:29:46.588042 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:29:46.588053 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:29:46.588063 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:29:46.588073 | orchestrator | 2025-11-23 00:29:46.588084 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-11-23 00:29:46.588095 | orchestrator | Sunday 23 November 2025 00:29:37 +0000 (0:00:00.398) 0:06:32.116 ******* 2025-11-23 00:29:46.588106 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:29:46.588116 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:29:46.588127 | 
orchestrator | ok: [testbed-node-1] 2025-11-23 00:29:46.588137 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:46.588148 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:29:46.588158 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:29:46.588169 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:29:46.588179 | orchestrator | 2025-11-23 00:29:46.588190 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-11-23 00:29:46.588201 | orchestrator | Sunday 23 November 2025 00:29:38 +0000 (0:00:01.232) 0:06:33.348 ******* 2025-11-23 00:29:46.588211 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:29:46.588222 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:29:46.588232 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:29:46.588243 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:29:46.588253 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:29:46.588264 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:29:46.588274 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:29:46.588285 | orchestrator | 2025-11-23 00:29:46.588296 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-11-23 00:29:46.588306 | orchestrator | Sunday 23 November 2025 00:29:39 +0000 (0:00:00.404) 0:06:33.753 ******* 2025-11-23 00:29:46.588317 | orchestrator | ok: [testbed-manager] 2025-11-23 00:29:46.588327 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:29:46.588338 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:29:46.588348 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:29:46.588359 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:29:46.588388 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:29:46.588399 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:29:46.588410 | orchestrator | 2025-11-23 00:29:46.588428 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2025-11-23 00:30:16.416450 | orchestrator | Sunday 23 November 2025 00:29:46 +0000 (0:00:07.243) 0:06:40.997 ******* 2025-11-23 00:30:16.416546 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:30:16.416579 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:30:16.416588 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:30:16.416595 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:30:16.416603 | orchestrator | ok: [testbed-manager] 2025-11-23 00:30:16.416611 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:30:16.416619 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:30:16.416626 | orchestrator | 2025-11-23 00:30:16.416635 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-11-23 00:30:16.416642 | orchestrator | Sunday 23 November 2025 00:29:47 +0000 (0:00:01.146) 0:06:42.144 ******* 2025-11-23 00:30:16.416650 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:30:16.416657 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:30:16.416664 | orchestrator | ok: [testbed-manager] 2025-11-23 00:30:16.416671 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:30:16.416678 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:30:16.416685 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:30:16.416692 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:30:16.416699 | orchestrator | 2025-11-23 00:30:16.416706 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-11-23 00:30:16.416713 | orchestrator | Sunday 23 November 2025 00:29:49 +0000 (0:00:01.645) 0:06:43.789 ******* 2025-11-23 00:30:16.416721 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:30:16.416728 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:30:16.416735 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:30:16.416742 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:30:16.416749 | 
orchestrator | ok: [testbed-manager] 2025-11-23 00:30:16.416756 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:30:16.416763 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:30:16.416770 | orchestrator | 2025-11-23 00:30:16.416778 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-23 00:30:16.416785 | orchestrator | Sunday 23 November 2025 00:29:50 +0000 (0:00:01.365) 0:06:45.154 ******* 2025-11-23 00:30:16.416792 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:30:16.416799 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:30:16.416806 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:30:16.416814 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:30:16.416821 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:30:16.416827 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:30:16.416835 | orchestrator | ok: [testbed-manager] 2025-11-23 00:30:16.416842 | orchestrator | 2025-11-23 00:30:16.416849 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-23 00:30:16.416856 | orchestrator | Sunday 23 November 2025 00:29:51 +0000 (0:00:00.834) 0:06:45.989 ******* 2025-11-23 00:30:16.416863 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:30:16.416870 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:30:16.416877 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:30:16.416884 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:30:16.416891 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:30:16.416899 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:30:16.416906 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:30:16.416913 | orchestrator | 2025-11-23 00:30:16.416920 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-11-23 00:30:16.416927 | orchestrator | Sunday 23 November 2025 00:29:52 +0000 (0:00:00.701) 0:06:46.690 ******* 2025-11-23 
00:30:16.416935 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:30:16.416944 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:30:16.416951 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:30:16.416959 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:30:16.416967 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:30:16.416975 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:30:16.416983 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:30:16.416991 | orchestrator |
2025-11-23 00:30:16.417014 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-11-23 00:30:16.417022 | orchestrator | Sunday 23 November 2025 00:29:52 +0000 (0:00:00.475) 0:06:47.166 *******
2025-11-23 00:30:16.417036 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:16.417045 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:16.417053 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:16.417061 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:16.417069 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:16.417077 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:16.417085 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:16.417092 | orchestrator |
2025-11-23 00:30:16.417101 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-11-23 00:30:16.417109 | orchestrator | Sunday 23 November 2025 00:29:53 +0000 (0:00:00.503) 0:06:47.670 *******
2025-11-23 00:30:16.417117 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:16.417125 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:16.417133 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:16.417141 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:16.417149 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:16.417157 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:16.417165 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:16.417173 | orchestrator |
2025-11-23 00:30:16.417181 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-11-23 00:30:16.417190 | orchestrator | Sunday 23 November 2025 00:29:53 +0000 (0:00:00.526) 0:06:48.197 *******
2025-11-23 00:30:16.417198 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:16.417206 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:16.417214 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:16.417222 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:16.417230 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:16.417238 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:16.417246 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:16.417254 | orchestrator |
2025-11-23 00:30:16.417262 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-11-23 00:30:16.417270 | orchestrator | Sunday 23 November 2025 00:29:54 +0000 (0:00:00.431) 0:06:48.628 *******
2025-11-23 00:30:16.417279 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:16.417287 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:16.417295 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:16.417303 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:16.417310 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:16.417317 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:16.417324 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:16.417331 | orchestrator |
2025-11-23 00:30:16.417338 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-11-23 00:30:16.417395 | orchestrator | Sunday 23 November 2025 00:29:59 +0000 (0:00:05.315) 0:06:53.944 *******
2025-11-23 00:30:16.417406 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:30:16.417414 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:30:16.417422 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:30:16.417429 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:30:16.417437 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:30:16.417450 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:30:16.417458 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:30:16.417465 | orchestrator |
2025-11-23 00:30:16.417473 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-11-23 00:30:16.417481 | orchestrator | Sunday 23 November 2025 00:29:59 +0000 (0:00:00.439) 0:06:54.383 *******
2025-11-23 00:30:16.417490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:30:16.417500 | orchestrator |
2025-11-23 00:30:16.417508 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-11-23 00:30:16.417516 | orchestrator | Sunday 23 November 2025 00:30:00 +0000 (0:00:00.784) 0:06:55.168 *******
2025-11-23 00:30:16.417523 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:16.417539 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:16.417546 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:16.417554 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:16.417561 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:16.417573 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:16.417585 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:16.417596 | orchestrator |
2025-11-23 00:30:16.417608 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-11-23 00:30:16.417620 | orchestrator | Sunday 23 November 2025 00:30:02 +0000 (0:00:01.680) 0:06:56.849 *******
2025-11-23 00:30:16.417633 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:16.417645 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:16.417657 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:16.417667 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:16.417675 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:16.417682 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:16.417689 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:16.417696 | orchestrator |
2025-11-23 00:30:16.417703 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-11-23 00:30:16.417710 | orchestrator | Sunday 23 November 2025 00:30:03 +0000 (0:00:01.045) 0:06:57.894 *******
2025-11-23 00:30:16.417718 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:16.417725 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:16.417732 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:16.417739 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:16.417747 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:16.417759 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:16.417770 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:16.417782 | orchestrator |
2025-11-23 00:30:16.417794 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-11-23 00:30:16.417806 | orchestrator | Sunday 23 November 2025 00:30:04 +0000 (0:00:00.691) 0:06:58.586 *******
2025-11-23 00:30:16.417819 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-23 00:30:16.417833 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-23 00:30:16.417845 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-23 00:30:16.417856 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-23 00:30:16.417863 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-23 00:30:16.417870 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-23 00:30:16.417877 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-11-23 00:30:16.417885 | orchestrator |
2025-11-23 00:30:16.417892 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-11-23 00:30:16.417899 | orchestrator | Sunday 23 November 2025 00:30:05 +0000 (0:00:01.576) 0:07:00.162 *******
2025-11-23 00:30:16.417907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:30:16.417915 | orchestrator |
2025-11-23 00:30:16.417922 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-11-23 00:30:16.417929 | orchestrator | Sunday 23 November 2025 00:30:06 +0000 (0:00:00.684) 0:07:00.847 *******
2025-11-23 00:30:16.417936 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:16.417944 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:16.417958 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:16.417965 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:16.417972 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:16.417979 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:16.417986 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:16.417993 | orchestrator |
2025-11-23 00:30:16.418001 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-11-23 00:30:16.418072 | orchestrator | Sunday 23 November 2025 00:30:16 +0000 (0:00:09.978) 0:07:10.826 *******
2025-11-23 00:30:43.831739 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:43.831850 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:43.831867 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:43.831879 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:43.831890 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:43.831901 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:43.831913 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:43.831924 | orchestrator |
2025-11-23 00:30:43.831954 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-11-23 00:30:43.831968 | orchestrator | Sunday 23 November 2025 00:30:18 +0000 (0:00:01.631) 0:07:12.457 *******
2025-11-23 00:30:43.831979 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:43.831990 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:43.832000 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:43.832011 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:43.832022 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:43.832032 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:43.832043 | orchestrator |
2025-11-23 00:30:43.832054 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-11-23 00:30:43.832065 | orchestrator | Sunday 23 November 2025 00:30:19 +0000 (0:00:01.200) 0:07:13.658 *******
2025-11-23 00:30:43.832076 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.832087 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.832097 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.832108 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.832119 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.832129 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.832140 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.832151 | orchestrator |
2025-11-23 00:30:43.832162 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-11-23 00:30:43.832174 | orchestrator |
2025-11-23 00:30:43.832185 | orchestrator | TASK [Include hardening role] **************************************************
2025-11-23 00:30:43.832196 | orchestrator | Sunday 23 November 2025 00:30:20 +0000 (0:00:01.224) 0:07:14.883 *******
2025-11-23 00:30:43.832206 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:30:43.832217 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:30:43.832228 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:30:43.832238 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:30:43.832249 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:30:43.832260 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:30:43.832271 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:30:43.832281 | orchestrator |
2025-11-23 00:30:43.832292 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-11-23 00:30:43.832303 | orchestrator |
2025-11-23 00:30:43.832314 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-11-23 00:30:43.832325 | orchestrator | Sunday 23 November 2025 00:30:20 +0000 (0:00:00.410) 0:07:15.294 *******
2025-11-23 00:30:43.832335 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.832346 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.832382 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.832393 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.832404 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.832415 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.832426 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.832460 | orchestrator |
2025-11-23 00:30:43.832471 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-11-23 00:30:43.832482 | orchestrator | Sunday 23 November 2025 00:30:22 +0000 (0:00:01.181) 0:07:16.475 *******
2025-11-23 00:30:43.832493 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:43.832504 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:43.832515 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:43.832525 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:43.832536 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:43.832546 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:43.832558 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:43.832569 | orchestrator |
2025-11-23 00:30:43.832580 | orchestrator | TASK [Include auditd role] *****************************************************
2025-11-23 00:30:43.832591 | orchestrator | Sunday 23 November 2025 00:30:23 +0000 (0:00:01.255) 0:07:17.730 *******
2025-11-23 00:30:43.832602 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:30:43.832613 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:30:43.832623 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:30:43.832634 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:30:43.832645 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:30:43.832655 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:30:43.832666 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:30:43.832677 | orchestrator |
2025-11-23 00:30:43.832688 | orchestrator | TASK [Include smartd role] *****************************************************
2025-11-23 00:30:43.832699 | orchestrator | Sunday 23 November 2025 00:30:23 +0000 (0:00:00.551) 0:07:18.281 *******
2025-11-23 00:30:43.832710 | orchestrator | included: osism.services.smartd for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:30:43.832722 | orchestrator |
2025-11-23 00:30:43.832733 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-11-23 00:30:43.832744 | orchestrator | Sunday 23 November 2025 00:30:24 +0000 (0:00:00.697) 0:07:18.979 *******
2025-11-23 00:30:43.832756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:30:43.832770 | orchestrator |
2025-11-23 00:30:43.832781 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-11-23 00:30:43.832792 | orchestrator | Sunday 23 November 2025 00:30:25 +0000 (0:00:00.691) 0:07:19.670 *******
2025-11-23 00:30:43.832803 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.832813 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.832824 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.832835 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.832846 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.832857 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.832867 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.832878 | orchestrator |
2025-11-23 00:30:43.832889 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-11-23 00:30:43.832917 | orchestrator | Sunday 23 November 2025 00:30:34 +0000 (0:00:09.030) 0:07:28.700 *******
2025-11-23 00:30:43.832929 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.832940 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.832950 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.832961 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.832972 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.832988 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.832999 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.833010 | orchestrator |
2025-11-23 00:30:43.833021 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-11-23 00:30:43.833032 | orchestrator | Sunday 23 November 2025 00:30:34 +0000 (0:00:00.721) 0:07:29.422 *******
2025-11-23 00:30:43.833043 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.833061 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.833072 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.833082 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.833093 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.833104 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.833114 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.833125 | orchestrator |
2025-11-23 00:30:43.833136 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-11-23 00:30:43.833146 | orchestrator | Sunday 23 November 2025 00:30:36 +0000 (0:00:01.199) 0:07:30.621 *******
2025-11-23 00:30:43.833157 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.833168 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.833178 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.833189 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.833200 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.833210 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.833221 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.833232 | orchestrator |
2025-11-23 00:30:43.833243 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-11-23 00:30:43.833253 | orchestrator | Sunday 23 November 2025 00:30:37 +0000 (0:00:01.596) 0:07:32.218 *******
2025-11-23 00:30:43.833264 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.833275 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.833285 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.833296 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.833307 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.833318 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.833328 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.833339 | orchestrator |
2025-11-23 00:30:43.833366 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-11-23 00:30:43.833378 | orchestrator | Sunday 23 November 2025 00:30:38 +0000 (0:00:01.044) 0:07:33.262 *******
2025-11-23 00:30:43.833388 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.833399 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.833410 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.833421 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.833431 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.833442 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.833452 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.833463 | orchestrator |
2025-11-23 00:30:43.833473 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-11-23 00:30:43.833484 | orchestrator |
2025-11-23 00:30:43.833495 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-11-23 00:30:43.833506 | orchestrator | Sunday 23 November 2025 00:30:39 +0000 (0:00:01.011) 0:07:34.274 *******
2025-11-23 00:30:43.833517 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:30:43.833528 | orchestrator |
2025-11-23 00:30:43.833539 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-11-23 00:30:43.833550 | orchestrator | Sunday 23 November 2025 00:30:40 +0000 (0:00:00.778) 0:07:35.053 *******
2025-11-23 00:30:43.833561 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:43.833571 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:43.833582 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:43.833593 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:43.833603 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:43.833614 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:43.833625 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:43.833635 | orchestrator |
2025-11-23 00:30:43.833646 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-11-23 00:30:43.833657 | orchestrator | Sunday 23 November 2025 00:30:41 +0000 (0:00:00.707) 0:07:35.761 *******
2025-11-23 00:30:43.833668 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:43.833685 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:43.833696 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:43.833707 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:43.833717 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:43.833728 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:43.833738 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:43.833749 | orchestrator |
2025-11-23 00:30:43.833760 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-11-23 00:30:43.833770 | orchestrator | Sunday 23 November 2025 00:30:42 +0000 (0:00:00.979) 0:07:36.740 *******
2025-11-23 00:30:43.833781 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-23 00:30:43.833792 | orchestrator |
2025-11-23 00:30:43.833803 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-11-23 00:30:43.833814 | orchestrator | Sunday 23 November 2025 00:30:43 +0000 (0:00:00.808) 0:07:37.549 *******
2025-11-23 00:30:43.833824 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:30:43.833835 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:30:43.833846 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:30:43.833857 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:30:43.833868 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:30:43.833878 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:30:43.833889 | orchestrator | ok: [testbed-manager]
2025-11-23 00:30:43.833899 | orchestrator |
2025-11-23 00:30:43.833910 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-11-23 00:30:43.833928 | orchestrator | Sunday 23 November 2025 00:30:43 +0000 (0:00:00.691) 0:07:38.240 *******
2025-11-23 00:30:45.022514 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:30:45.022608 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:30:45.022622 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:30:45.022632 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:30:45.022641 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:30:45.022649 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:30:45.022677 | orchestrator | changed: [testbed-manager]
2025-11-23 00:30:45.022687 | orchestrator |
2025-11-23 00:30:45.022696 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:30:45.022707 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2025-11-23 00:30:45.022718 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-11-23 00:30:45.022727 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-11-23 00:30:45.022735 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-11-23 00:30:45.022744 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-11-23 00:30:45.022753 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-11-23 00:30:45.022762 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-11-23 00:30:45.022770 | orchestrator |
2025-11-23 00:30:45.022779 | orchestrator |
2025-11-23 00:30:45.022788 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:30:45.022797 | orchestrator | Sunday 23 November 2025 00:30:44 +0000 (0:00:00.935) 0:07:39.176 *******
2025-11-23 00:30:45.022805 | orchestrator | ===============================================================================
2025-11-23 00:30:45.022834 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.62s
2025-11-23 00:30:45.022843 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.38s
2025-11-23 00:30:45.022852 | orchestrator | osism.commons.packages : Download required packages -------------------- 32.72s
2025-11-23 00:30:45.022860 | orchestrator | osism.commons.repository : Update package cache ------------------------ 20.51s
2025-11-23 00:30:45.022869 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.47s
2025-11-23 00:30:45.022878 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.22s
2025-11-23 00:30:45.022887 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.16s
2025-11-23 00:30:45.022896 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.98s
2025-11-23 00:30:45.022904 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.09s
2025-11-23 00:30:45.022913 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.03s
2025-11-23 00:30:45.022921 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.99s
2025-11-23 00:30:45.022930 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.39s
2025-11-23 00:30:45.022938 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.36s
2025-11-23 00:30:45.022947 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.19s
2025-11-23 00:30:45.022955 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.10s
2025-11-23 00:30:45.022964 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.24s
2025-11-23 00:30:45.022972 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.63s
2025-11-23 00:30:45.022981 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.84s
2025-11-23 00:30:45.022989 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.79s
2025-11-23 00:30:45.022998 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.67s
2025-11-23 00:30:45.194658 | orchestrator | + osism apply fail2ban
2025-11-23 00:30:57.497699 | orchestrator | 2025-11-23 00:30:57 | INFO  | Task 392d12ef-e09f-4af5-8a76-b9055718e291 (fail2ban) was prepared for execution.
2025-11-23 00:30:57.497840 | orchestrator | 2025-11-23 00:30:57 | INFO  | It takes a moment until task 392d12ef-e09f-4af5-8a76-b9055718e291 (fail2ban) has been started and output is visible here.
2025-11-23 00:31:17.897657 | orchestrator |
2025-11-23 00:31:17.897770 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2025-11-23 00:31:17.897788 | orchestrator |
2025-11-23 00:31:17.897800 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2025-11-23 00:31:17.897812 | orchestrator | Sunday 23 November 2025 00:31:01 +0000 (0:00:00.226) 0:00:00.226 *******
2025-11-23 00:31:17.897825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:31:17.897838 | orchestrator |
2025-11-23 00:31:17.897850 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2025-11-23 00:31:17.897860 | orchestrator | Sunday 23 November 2025 00:31:02 +0000 (0:00:00.977) 0:00:01.203 *******
2025-11-23 00:31:17.897890 | orchestrator | changed: [testbed-manager]
2025-11-23 00:31:17.897902 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:31:17.897914 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:31:17.897925 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:31:17.897935 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:31:17.897946 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:31:17.897956 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:31:17.897967 | orchestrator |
2025-11-23 00:31:17.897978 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2025-11-23 00:31:17.898011 | orchestrator | Sunday 23 November 2025 00:31:13 +0000 (0:00:11.227) 0:00:12.431 *******
2025-11-23 00:31:17.898086 | orchestrator | changed: [testbed-manager]
2025-11-23 00:31:17.898097 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:31:17.898108 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:31:17.898119 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:31:17.898129 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:31:17.898140 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:31:17.898150 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:31:17.898161 | orchestrator |
2025-11-23 00:31:17.898172 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2025-11-23 00:31:17.898182 | orchestrator | Sunday 23 November 2025 00:31:14 +0000 (0:00:01.389) 0:00:13.821 *******
2025-11-23 00:31:17.898193 | orchestrator | ok: [testbed-manager]
2025-11-23 00:31:17.898207 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:31:17.898219 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:31:17.898231 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:31:17.898243 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:31:17.898254 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:31:17.898266 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:31:17.898278 | orchestrator |
2025-11-23 00:31:17.898290 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2025-11-23 00:31:17.898302 | orchestrator | Sunday 23 November 2025 00:31:16 +0000 (0:00:01.249) 0:00:15.071 *******
2025-11-23 00:31:17.898315 | orchestrator | changed: [testbed-manager]
2025-11-23 00:31:17.898326 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:31:17.898359 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:31:17.898372 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:31:17.898384 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:31:17.898396 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:31:17.898408 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:31:17.898420 | orchestrator |
2025-11-23 00:31:17.898433 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:31:17.898446 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:31:17.898459 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:31:17.898472 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:31:17.898484 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:31:17.898497 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:31:17.898509 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:31:17.898522 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:31:17.898534 | orchestrator |
2025-11-23 00:31:17.898547 | orchestrator |
2025-11-23 00:31:17.898560 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:31:17.898572 | orchestrator | Sunday 23 November 2025 00:31:17 +0000 (0:00:01.422) 0:00:16.493 *******
2025-11-23 00:31:17.898583 | orchestrator | ===============================================================================
2025-11-23 00:31:17.898594 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.23s
2025-11-23 00:31:17.898604 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.42s
2025-11-23 00:31:17.898615 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.39s
2025-11-23 00:31:17.898635 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.25s
2025-11-23 00:31:17.898645 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 0.98s
2025-11-23 00:31:18.075296 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-11-23 00:31:18.075448 | orchestrator | + osism apply network
2025-11-23 00:31:30.000573 | orchestrator | 2025-11-23 00:31:29 | INFO  | Task f1785a33-b791-4ad4-b556-799e8042e61b (network) was prepared for execution.
2025-11-23 00:31:30.000688 | orchestrator | 2025-11-23 00:31:29 | INFO  | It takes a moment until task f1785a33-b791-4ad4-b556-799e8042e61b (network) has been started and output is visible here.
2025-11-23 00:31:55.755453 | orchestrator |
2025-11-23 00:31:55.755600 | orchestrator | PLAY [Apply role network] ******************************************************
2025-11-23 00:31:55.755626 | orchestrator |
2025-11-23 00:31:55.755645 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-11-23 00:31:55.755663 | orchestrator | Sunday 23 November 2025 00:31:33 +0000 (0:00:00.238) 0:00:00.238 *******
2025-11-23 00:31:55.755681 | orchestrator | ok: [testbed-manager]
2025-11-23 00:31:55.755701 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:31:55.755719 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:31:55.755739 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:31:55.755756 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:31:55.755775 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:31:55.755793 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:31:55.755812 | orchestrator |
2025-11-23 00:31:55.755831 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-11-23 00:31:55.755844 | orchestrator | Sunday 23 November 2025 00:31:34 +0000 (0:00:00.591) 0:00:00.830 *******
2025-11-23 00:31:55.755856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:31:55.755870 | orchestrator |
2025-11-23 00:31:55.755883 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-11-23 00:31:55.755894 | orchestrator | Sunday 23 November 2025 00:31:35 +0000 (0:00:01.039) 0:00:01.870 *******
2025-11-23 00:31:55.755905 | orchestrator | ok: [testbed-manager]
2025-11-23 00:31:55.755916 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:31:55.755926 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:31:55.755937 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:31:55.755947 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:31:55.755958 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:31:55.755969 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:31:55.755979 | orchestrator |
2025-11-23 00:31:55.755990 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-11-23 00:31:55.756007 | orchestrator | Sunday 23 November 2025 00:31:37 +0000 (0:00:01.861) 0:00:03.732 *******
2025-11-23 00:31:55.756025 | orchestrator | ok: [testbed-manager]
2025-11-23 00:31:55.756044 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:31:55.756061 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:31:55.756079 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:31:55.756123 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:31:55.756146 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:31:55.756164 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:31:55.756183 | orchestrator |
2025-11-23 00:31:55.756202 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-11-23 00:31:55.756219 | orchestrator | Sunday 23 November 2025 00:31:38 +0000 (0:00:01.539) 0:00:05.272 *******
2025-11-23 00:31:55.756238 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-11-23 00:31:55.756258 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-11-23 00:31:55.756277 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-11-23 00:31:55.756297 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-11-23 00:31:55.756315 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-11-23 00:31:55.756390 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-11-23 00:31:55.756404 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-11-23 00:31:55.756415 | orchestrator | 2025-11-23 00:31:55.756426 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-11-23 00:31:55.756437 | orchestrator | Sunday 23 November 2025 00:31:39 +0000 (0:00:00.856) 0:00:06.129 ******* 2025-11-23 00:31:55.756456 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-23 00:31:55.756475 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-23 00:31:55.756494 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:31:55.756512 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-23 00:31:55.756531 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 00:31:55.756549 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-23 00:31:55.756567 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-23 00:31:55.756587 | orchestrator | 2025-11-23 00:31:55.756606 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-11-23 00:31:55.756620 | orchestrator | Sunday 23 November 2025 00:31:42 +0000 (0:00:02.732) 0:00:08.861 ******* 2025-11-23 00:31:55.756631 | orchestrator | changed: [testbed-manager] 2025-11-23 00:31:55.756642 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:31:55.756653 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:31:55.756663 | orchestrator | changed: 
[testbed-node-2] 2025-11-23 00:31:55.756674 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:31:55.756685 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:31:55.756696 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:31:55.756706 | orchestrator | 2025-11-23 00:31:55.756717 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-11-23 00:31:55.756728 | orchestrator | Sunday 23 November 2025 00:31:43 +0000 (0:00:01.445) 0:00:10.307 ******* 2025-11-23 00:31:55.756739 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:31:55.756750 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 00:31:55.756760 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-23 00:31:55.756771 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-23 00:31:55.756781 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-23 00:31:55.756792 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-23 00:31:55.756803 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-23 00:31:55.756813 | orchestrator | 2025-11-23 00:31:55.756824 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-11-23 00:31:55.756835 | orchestrator | Sunday 23 November 2025 00:31:45 +0000 (0:00:01.450) 0:00:11.757 ******* 2025-11-23 00:31:55.756846 | orchestrator | ok: [testbed-manager] 2025-11-23 00:31:55.756856 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:31:55.756867 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:31:55.756878 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:31:55.756888 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:31:55.756899 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:31:55.756910 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:31:55.756920 | orchestrator | 2025-11-23 00:31:55.756932 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-11-23 00:31:55.756977 | 
orchestrator | Sunday 23 November 2025 00:31:46 +0000 (0:00:00.967) 0:00:12.725 ******* 2025-11-23 00:31:55.756999 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:31:55.757016 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:31:55.757034 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:31:55.757052 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:31:55.757070 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:31:55.757089 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:31:55.757105 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:31:55.757123 | orchestrator | 2025-11-23 00:31:55.757140 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-11-23 00:31:55.757158 | orchestrator | Sunday 23 November 2025 00:31:46 +0000 (0:00:00.564) 0:00:13.289 ******* 2025-11-23 00:31:55.757203 | orchestrator | ok: [testbed-manager] 2025-11-23 00:31:55.757222 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:31:55.757240 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:31:55.757257 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:31:55.757275 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:31:55.757294 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:31:55.757309 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:31:55.757326 | orchestrator | 2025-11-23 00:31:55.757393 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-11-23 00:31:55.757413 | orchestrator | Sunday 23 November 2025 00:31:48 +0000 (0:00:01.992) 0:00:15.282 ******* 2025-11-23 00:31:55.757432 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:31:55.757450 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:31:55.757469 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:31:55.757486 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:31:55.757503 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:31:55.757520 | 
orchestrator | skipping: [testbed-node-5] 2025-11-23 00:31:55.757540 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-11-23 00:31:55.757559 | orchestrator | 2025-11-23 00:31:55.757578 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-11-23 00:31:55.757595 | orchestrator | Sunday 23 November 2025 00:31:49 +0000 (0:00:00.820) 0:00:16.102 ******* 2025-11-23 00:31:55.757615 | orchestrator | ok: [testbed-manager] 2025-11-23 00:31:55.757633 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:31:55.757651 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:31:55.757670 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:31:55.757688 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:31:55.757706 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:31:55.757724 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:31:55.757743 | orchestrator | 2025-11-23 00:31:55.757761 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-11-23 00:31:55.757780 | orchestrator | Sunday 23 November 2025 00:31:51 +0000 (0:00:01.519) 0:00:17.622 ******* 2025-11-23 00:31:55.757799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:31:55.757821 | orchestrator | 2025-11-23 00:31:55.757840 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-11-23 00:31:55.757858 | orchestrator | Sunday 23 November 2025 00:31:52 +0000 (0:00:01.078) 0:00:18.700 ******* 2025-11-23 00:31:55.757876 | orchestrator | ok: [testbed-manager] 2025-11-23 00:31:55.757895 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:31:55.757913 | orchestrator 
| ok: [testbed-node-1] 2025-11-23 00:31:55.757931 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:31:55.757950 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:31:55.757968 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:31:55.757986 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:31:55.758005 | orchestrator | 2025-11-23 00:31:55.758107 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-11-23 00:31:55.758127 | orchestrator | Sunday 23 November 2025 00:31:54 +0000 (0:00:01.901) 0:00:20.601 ******* 2025-11-23 00:31:55.758146 | orchestrator | ok: [testbed-manager] 2025-11-23 00:31:55.758165 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:31:55.758184 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:31:55.758203 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:31:55.758221 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:31:55.758240 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:31:55.758437 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:31:55.758462 | orchestrator | 2025-11-23 00:31:55.758482 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-23 00:31:55.758502 | orchestrator | Sunday 23 November 2025 00:31:54 +0000 (0:00:00.591) 0:00:21.193 ******* 2025-11-23 00:31:55.758522 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-11-23 00:31:55.758562 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-11-23 00:31:55.758583 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-11-23 00:31:55.758601 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-11-23 00:31:55.758619 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-23 00:31:55.758635 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-11-23 00:31:55.758652 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-11-23 00:31:55.758669 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-23 00:31:55.758685 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-23 00:31:55.758702 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-23 00:31:55.758719 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-11-23 00:31:55.758735 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-23 00:31:55.758752 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-23 00:31:55.758769 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-23 00:31:55.758786 | orchestrator | 2025-11-23 00:31:55.758822 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-11-23 00:32:10.600944 | orchestrator | Sunday 23 November 2025 00:31:55 +0000 (0:00:01.035) 0:00:22.228 ******* 2025-11-23 00:32:10.601052 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:32:10.601070 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:32:10.601084 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:32:10.601096 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:32:10.601107 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:32:10.601119 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:32:10.601131 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:32:10.601143 | orchestrator | 2025-11-23 00:32:10.601164 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-11-23 00:32:10.601176 | orchestrator | Sunday 23 November 2025 00:31:56 +0000 (0:00:00.587) 0:00:22.815 ******* 2025-11-23 00:32:10.601189 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2025-11-23 00:32:10.601204 | orchestrator | 2025-11-23 00:32:10.601216 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-11-23 00:32:10.601228 | orchestrator | Sunday 23 November 2025 00:32:00 +0000 (0:00:03.949) 0:00:26.764 ******* 2025-11-23 00:32:10.601241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601268 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-23 
00:32:10.601324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601389 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601509 | orchestrator | 2025-11-23 00:32:10.601522 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-11-23 00:32:10.601535 | orchestrator | Sunday 23 November 2025 00:32:05 +0000 (0:00:05.115) 0:00:31.880 ******* 2025-11-23 00:32:10.601548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601561 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601595 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601632 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-23 00:32:10.601671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 
'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:10.601717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:15.567824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-23 00:32:15.567928 | orchestrator | 2025-11-23 00:32:15.567962 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-11-23 00:32:15.567977 | orchestrator | Sunday 23 November 2025 00:32:10 +0000 (0:00:05.186) 0:00:37.066 ******* 2025-11-23 00:32:15.567990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:32:15.568002 | orchestrator | 2025-11-23 00:32:15.568013 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2025-11-23 00:32:15.568024 | orchestrator | Sunday 23 November 2025 00:32:11 +0000 (0:00:01.077) 0:00:38.144 ******* 2025-11-23 00:32:15.568056 | orchestrator | ok: [testbed-manager] 2025-11-23 00:32:15.568069 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:32:15.568080 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:32:15.568090 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:32:15.568101 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:32:15.568111 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:32:15.568122 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:32:15.568133 | orchestrator | 2025-11-23 00:32:15.568144 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-23 00:32:15.568155 | orchestrator | Sunday 23 November 2025 00:32:12 +0000 (0:00:00.968) 0:00:39.113 ******* 2025-11-23 00:32:15.568165 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-23 00:32:15.568177 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-23 00:32:15.568187 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-23 00:32:15.568198 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-23 00:32:15.568209 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-23 00:32:15.568220 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-23 00:32:15.568230 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-23 00:32:15.568241 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-23 00:32:15.568252 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:32:15.568263 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-11-23 00:32:15.568274 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-23 00:32:15.568284 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-23 00:32:15.568295 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-23 00:32:15.568305 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:32:15.568316 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-23 00:32:15.568327 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-23 00:32:15.568338 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-23 00:32:15.568381 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-23 00:32:15.568403 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:32:15.568424 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-23 00:32:15.568444 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-23 00:32:15.568458 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-23 00:32:15.568471 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-23 00:32:15.568483 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:32:15.568496 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-23 00:32:15.568508 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-23 00:32:15.568520 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-23 00:32:15.568531 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-23 00:32:15.568544 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:32:15.568556 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:32:15.568568 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-23 00:32:15.568581 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-23 00:32:15.568601 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-23 00:32:15.568613 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-23 00:32:15.568626 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:32:15.568638 | orchestrator | 2025-11-23 00:32:15.568650 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-11-23 00:32:15.568681 | orchestrator | Sunday 23 November 2025 00:32:14 +0000 (0:00:01.619) 0:00:40.732 ******* 2025-11-23 00:32:15.568695 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:32:15.568707 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:32:15.568719 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:32:15.568730 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:32:15.568740 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:32:15.568750 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:32:15.568767 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:32:15.568778 | orchestrator | 2025-11-23 00:32:15.568789 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-11-23 00:32:15.568800 | orchestrator | Sunday 23 November 2025 00:32:14 +0000 (0:00:00.538) 0:00:41.271 ******* 2025-11-23 00:32:15.568810 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:32:15.568821 | orchestrator | skipping: [testbed-node-0] 2025-11-23 
00:32:15.568831 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:32:15.568842 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:32:15.568852 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:32:15.568863 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:32:15.568873 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:32:15.568884 | orchestrator |
2025-11-23 00:32:15.568894 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:32:15.568906 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-23 00:32:15.568918 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 00:32:15.568929 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 00:32:15.568940 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 00:32:15.568950 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 00:32:15.568961 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 00:32:15.568972 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 00:32:15.568982 | orchestrator |
2025-11-23 00:32:15.568993 | orchestrator |
2025-11-23 00:32:15.569004 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:32:15.569014 | orchestrator | Sunday 23 November 2025 00:32:15 +0000 (0:00:00.563) 0:00:41.834 *******
2025-11-23 00:32:15.569025 | orchestrator | ===============================================================================
2025-11-23 00:32:15.569036 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.19s
2025-11-23 00:32:15.569046 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.12s
2025-11-23 00:32:15.569057 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.95s
2025-11-23 00:32:15.569068 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.73s
2025-11-23 00:32:15.569085 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.99s
2025-11-23 00:32:15.569096 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.90s
2025-11-23 00:32:15.569107 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.86s
2025-11-23 00:32:15.569117 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.62s
2025-11-23 00:32:15.569127 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.54s
2025-11-23 00:32:15.569138 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.52s
2025-11-23 00:32:15.569149 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.45s
2025-11-23 00:32:15.569159 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.45s
2025-11-23 00:32:15.569170 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.08s
2025-11-23 00:32:15.569180 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.08s
2025-11-23 00:32:15.569191 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.04s
2025-11-23 00:32:15.569201 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.04s
2025-11-23 00:32:15.569212 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s
2025-11-23 00:32:15.569222 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.97s
2025-11-23 00:32:15.569233 | orchestrator | osism.commons.network : Create required directories --------------------- 0.86s
2025-11-23 00:32:15.569244 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.82s
2025-11-23 00:32:15.748991 | orchestrator | + osism apply wireguard
2025-11-23 00:32:27.518063 | orchestrator | 2025-11-23 00:32:27 | INFO  | Task b392fb3b-c1af-44c4-9a42-1dda522f7cf1 (wireguard) was prepared for execution.
2025-11-23 00:32:27.518158 | orchestrator | 2025-11-23 00:32:27 | INFO  | It takes a moment until task b392fb3b-c1af-44c4-9a42-1dda522f7cf1 (wireguard) has been started and output is visible here.
2025-11-23 00:32:44.300734 | orchestrator |
2025-11-23 00:32:44.300814 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-11-23 00:32:44.300822 | orchestrator |
2025-11-23 00:32:44.300827 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-11-23 00:32:44.300832 | orchestrator | Sunday 23 November 2025 00:32:31 +0000 (0:00:00.162) 0:00:00.162 *******
2025-11-23 00:32:44.300837 | orchestrator | ok: [testbed-manager]
2025-11-23 00:32:44.300843 | orchestrator |
2025-11-23 00:32:44.300847 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-11-23 00:32:44.300853 | orchestrator | Sunday 23 November 2025 00:32:32 +0000 (0:00:01.195) 0:00:01.357 *******
2025-11-23 00:32:44.300857 | orchestrator | changed: [testbed-manager]
2025-11-23 00:32:44.300862 | orchestrator |
2025-11-23 00:32:44.300867 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-11-23 00:32:44.300871 | orchestrator | Sunday 23 November 2025 00:32:37 +0000 (0:00:05.301) 0:00:06.659 *******
2025-11-23 00:32:44.300875 | orchestrator | changed: [testbed-manager]
2025-11-23 00:32:44.300880 | orchestrator |
2025-11-23 00:32:44.300884 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-11-23 00:32:44.300888 | orchestrator | Sunday 23 November 2025 00:32:38 +0000 (0:00:00.506) 0:00:07.165 *******
2025-11-23 00:32:44.300892 | orchestrator | changed: [testbed-manager]
2025-11-23 00:32:44.300897 | orchestrator |
2025-11-23 00:32:44.300901 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-11-23 00:32:44.300905 | orchestrator | Sunday 23 November 2025 00:32:38 +0000 (0:00:00.381) 0:00:07.546 *******
2025-11-23 00:32:44.300909 | orchestrator | ok: [testbed-manager]
2025-11-23 00:32:44.300914 | orchestrator |
2025-11-23 00:32:44.300918 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-11-23 00:32:44.300922 | orchestrator | Sunday 23 November 2025 00:32:39 +0000 (0:00:00.564) 0:00:08.110 *******
2025-11-23 00:32:44.300946 | orchestrator | ok: [testbed-manager]
2025-11-23 00:32:44.300950 | orchestrator |
2025-11-23 00:32:44.300955 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-11-23 00:32:44.300959 | orchestrator | Sunday 23 November 2025 00:32:39 +0000 (0:00:00.362) 0:00:08.473 *******
2025-11-23 00:32:44.300964 | orchestrator | ok: [testbed-manager]
2025-11-23 00:32:44.300968 | orchestrator |
2025-11-23 00:32:44.300972 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-11-23 00:32:44.300976 | orchestrator | Sunday 23 November 2025 00:32:39 +0000 (0:00:00.378) 0:00:08.851 *******
2025-11-23 00:32:44.300981 | orchestrator | changed: [testbed-manager]
2025-11-23 00:32:44.300985 | orchestrator |
2025-11-23 00:32:44.300989 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-11-23 00:32:44.300993 | orchestrator | Sunday 23 November 2025 00:32:40 +0000 (0:00:01.036) 0:00:09.888 *******
2025-11-23 00:32:44.300997 | orchestrator | changed: [testbed-manager] => (item=None)
2025-11-23 00:32:44.301002 | orchestrator | changed: [testbed-manager]
2025-11-23 00:32:44.301006 | orchestrator |
2025-11-23 00:32:44.301010 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-11-23 00:32:44.301015 | orchestrator | Sunday 23 November 2025 00:32:41 +0000 (0:00:00.849) 0:00:10.738 *******
2025-11-23 00:32:44.301019 | orchestrator | changed: [testbed-manager]
2025-11-23 00:32:44.301023 | orchestrator |
2025-11-23 00:32:44.301027 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-11-23 00:32:44.301031 | orchestrator | Sunday 23 November 2025 00:32:43 +0000 (0:00:01.492) 0:00:12.230 *******
2025-11-23 00:32:44.301036 | orchestrator | changed: [testbed-manager]
2025-11-23 00:32:44.301040 | orchestrator |
2025-11-23 00:32:44.301044 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:32:44.301048 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:32:44.301053 | orchestrator |
2025-11-23 00:32:44.301058 | orchestrator |
2025-11-23 00:32:44.301062 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:32:44.301066 | orchestrator | Sunday 23 November 2025 00:32:44 +0000 (0:00:00.851) 0:00:13.082 *******
2025-11-23 00:32:44.301070 | orchestrator | ===============================================================================
2025-11-23 00:32:44.301075 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.30s
2025-11-23 00:32:44.301079 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.49s
2025-11-23 00:32:44.301083 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.20s
2025-11-23 00:32:44.301087 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.04s
2025-11-23 00:32:44.301103 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.85s
2025-11-23 00:32:44.301108 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.85s
2025-11-23 00:32:44.301112 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s
2025-11-23 00:32:44.301116 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.51s
2025-11-23 00:32:44.301121 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s
2025-11-23 00:32:44.301125 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.38s
2025-11-23 00:32:44.301129 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.36s
2025-11-23 00:32:44.534439 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-11-23 00:32:44.561396 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-11-23 00:32:44.561469 | orchestrator | Dload Upload Total Spent Left Speed
2025-11-23 00:32:44.633091 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 211 0 --:--:-- --:--:-- --:--:-- 214
2025-11-23 00:32:44.645657 | orchestrator | + osism apply --environment custom workarounds
2025-11-23 00:32:46.324753 | orchestrator | 2025-11-23 00:32:46 | INFO  | Trying to run play workarounds in environment custom
2025-11-23 00:32:56.565832 | orchestrator | 2025-11-23 00:32:56 | INFO  | Task d3bbdc02-fc69-4bee-be9b-70d60999d05f (workarounds) was prepared for execution.
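For context on what the wireguard play above produces: the role generates server and preshared keys on testbed-manager and renders a wg0.conf from them. The fragment below is only an illustration of the typical shape of such a server-side file; the addresses, port, and placeholder keys are assumptions for this sketch, not values taken from this run.

```ini
; Illustrative wg0.conf shape (placeholders, not values from this job)
[Interface]
Address = 192.168.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.0.2/32
```

The `Manage wg-quick@wg0.service service` task then enables the unit that brings this interface up, and the handler restarts it after configuration changes.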
2025-11-23 00:32:56.565960 | orchestrator | 2025-11-23 00:32:56 | INFO  | It takes a moment until task d3bbdc02-fc69-4bee-be9b-70d60999d05f (workarounds) has been started and output is visible here.
2025-11-23 00:33:18.924101 | orchestrator |
2025-11-23 00:33:18.924254 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 00:33:18.924284 | orchestrator |
2025-11-23 00:33:18.924303 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-11-23 00:33:18.924321 | orchestrator | Sunday 23 November 2025 00:33:00 +0000 (0:00:00.111) 0:00:00.111 *******
2025-11-23 00:33:18.924339 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-11-23 00:33:18.924393 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-11-23 00:33:18.924411 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-11-23 00:33:18.924428 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-11-23 00:33:18.924448 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-11-23 00:33:18.924466 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-11-23 00:33:18.924485 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-11-23 00:33:18.924503 | orchestrator |
2025-11-23 00:33:18.924523 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-11-23 00:33:18.924535 | orchestrator |
2025-11-23 00:33:18.924546 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-11-23 00:33:18.924557 | orchestrator | Sunday 23 November 2025 00:33:00 +0000 (0:00:00.669) 0:00:00.780 *******
2025-11-23 00:33:18.924568 | orchestrator | ok: [testbed-manager]
2025-11-23 00:33:18.924580 | orchestrator |
2025-11-23 00:33:18.924591 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-11-23 00:33:18.924602 | orchestrator |
2025-11-23 00:33:18.924613 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-11-23 00:33:18.924624 | orchestrator | Sunday 23 November 2025 00:33:03 +0000 (0:00:02.144) 0:00:02.925 *******
2025-11-23 00:33:18.924635 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:33:18.924646 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:33:18.924657 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:33:18.924668 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:33:18.924678 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:33:18.924689 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:33:18.924700 | orchestrator |
2025-11-23 00:33:18.924711 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-11-23 00:33:18.924722 | orchestrator |
2025-11-23 00:33:18.924732 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-11-23 00:33:18.924743 | orchestrator | Sunday 23 November 2025 00:33:04 +0000 (0:00:01.710) 0:00:04.636 *******
2025-11-23 00:33:18.924755 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-23 00:33:18.924767 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-23 00:33:18.924778 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-23 00:33:18.924789 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-23 00:33:18.924799 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-23 00:33:18.924838 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-11-23 00:33:18.924849 | orchestrator |
2025-11-23 00:33:18.924860 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-11-23 00:33:18.924871 | orchestrator | Sunday 23 November 2025 00:33:06 +0000 (0:00:01.444) 0:00:06.080 *******
2025-11-23 00:33:18.924882 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:33:18.924893 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:33:18.924903 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:33:18.924914 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:33:18.924925 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:33:18.924935 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:33:18.924946 | orchestrator |
2025-11-23 00:33:18.924956 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-11-23 00:33:18.924967 | orchestrator | Sunday 23 November 2025 00:33:09 +0000 (0:00:03.647) 0:00:09.728 *******
2025-11-23 00:33:18.924978 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:33:18.924988 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:33:18.924999 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:33:18.925009 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:33:18.925020 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:33:18.925030 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:33:18.925041 | orchestrator |
2025-11-23 00:33:18.925051 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-11-23 00:33:18.925062 | orchestrator |
2025-11-23 00:33:18.925073 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-11-23 00:33:18.925083 | orchestrator | Sunday 23 November 2025 00:33:10 +0000 (0:00:00.553) 0:00:10.282 *******
2025-11-23 00:33:18.925094 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:33:18.925104 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:33:18.925115 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:33:18.925125 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:33:18.925136 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:33:18.925146 | orchestrator | changed: [testbed-manager]
2025-11-23 00:33:18.925157 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:33:18.925167 | orchestrator |
2025-11-23 00:33:18.925178 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-11-23 00:33:18.925189 | orchestrator | Sunday 23 November 2025 00:33:11 +0000 (0:00:01.358) 0:00:11.640 *******
2025-11-23 00:33:18.925199 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:33:18.925226 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:33:18.925237 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:33:18.925248 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:33:18.925259 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:33:18.925269 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:33:18.925300 | orchestrator | changed: [testbed-manager]
2025-11-23 00:33:18.925311 | orchestrator |
2025-11-23 00:33:18.925322 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-11-23 00:33:18.925332 | orchestrator | Sunday 23 November 2025 00:33:13 +0000 (0:00:01.336) 0:00:12.977 *******
2025-11-23 00:33:18.925343 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:33:18.925402 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:33:18.925415 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:33:18.925425 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:33:18.925436 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:33:18.925447 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:33:18.925457 | orchestrator | ok: [testbed-manager]
2025-11-23 00:33:18.925468 | orchestrator |
2025-11-23 00:33:18.925479 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-11-23 00:33:18.925490 | orchestrator | Sunday 23 November 2025 00:33:14 +0000 (0:00:01.343) 0:00:14.320 *******
2025-11-23 00:33:18.925500 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:33:18.925511 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:33:18.925521 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:33:18.925542 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:33:18.925552 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:33:18.925563 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:33:18.925574 | orchestrator | changed: [testbed-manager]
2025-11-23 00:33:18.925584 | orchestrator |
2025-11-23 00:33:18.925595 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-11-23 00:33:18.925605 | orchestrator | Sunday 23 November 2025 00:33:15 +0000 (0:00:01.566) 0:00:15.887 *******
2025-11-23 00:33:18.925616 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:33:18.925626 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:33:18.925637 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:33:18.925647 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:33:18.925658 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:33:18.925668 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:33:18.925679 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:33:18.925689 | orchestrator |
2025-11-23 00:33:18.925700 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-11-23 00:33:18.925710 | orchestrator |
2025-11-23 00:33:18.925721 | orchestrator | TASK [Install python3-docker] **************************************************
2025-11-23 00:33:18.925732 | orchestrator | Sunday 23 November 2025 00:33:16 +0000 (0:00:00.516) 0:00:16.404 *******
2025-11-23 00:33:18.925742 | orchestrator | ok: [testbed-manager]
2025-11-23 00:33:18.925753 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:33:18.925763 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:33:18.925774 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:33:18.925784 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:33:18.925795 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:33:18.925805 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:33:18.925816 | orchestrator |
2025-11-23 00:33:18.925827 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:33:18.925839 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-23 00:33:18.925851 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:18.925862 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:18.925873 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:18.925884 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:18.925894 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:18.925905 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:18.925915 | orchestrator |
2025-11-23 00:33:18.925926 | orchestrator |
2025-11-23 00:33:18.925937 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:33:18.925948 | orchestrator | Sunday 23 November 2025 00:33:18 +0000 (0:00:02.391) 0:00:18.796 *******
2025-11-23 00:33:18.925958 | orchestrator | ===============================================================================
2025-11-23 00:33:18.925969 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.65s
2025-11-23 00:33:18.925979 | orchestrator | Install python3-docker -------------------------------------------------- 2.39s
2025-11-23 00:33:18.925990 | orchestrator | Apply netplan configuration --------------------------------------------- 2.14s
2025-11-23 00:33:18.926000 | orchestrator | Apply netplan configuration --------------------------------------------- 1.71s
2025-11-23 00:33:18.926066 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.57s
2025-11-23 00:33:18.926080 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.44s
2025-11-23 00:33:18.926090 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.36s
2025-11-23 00:33:18.926101 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.34s
2025-11-23 00:33:18.926118 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.34s
2025-11-23 00:33:18.926129 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.67s
2025-11-23 00:33:18.926140 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.55s
2025-11-23 00:33:18.926158 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.52s
2025-11-23 00:33:19.297311 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-11-23 00:33:31.068268 | orchestrator | 2025-11-23 00:33:31 | INFO  | Task 880a2810-8bd7-40d5-8d3b-07c2f497f8a7 (reboot) was prepared for execution.
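The PLAY RECAP lines in the output above follow a fixed `key=value` layout, which makes them easy to check mechanically. The snippet below is a sketch of such a check, not part of this job: the recap string is copied verbatim from the log, while the helper logic itself is an assumption about how one might post-process it.

```shell
#!/bin/sh
# Sketch: extract failed/unreachable counts from an Ansible PLAY RECAP line
# and report whether the host passed. The recap string is taken from this log.
recap='testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0'
failed=$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(printf '%s\n' "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)
if [ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ]; then
  echo "PASS"
else
  echo "FAIL"
fi
```

A run over each host line of the recap would flag any node with a non-zero `failed=` or `unreachable=` count before the next `osism apply` step proceeds.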
2025-11-23 00:33:31.068439 | orchestrator | 2025-11-23 00:33:31 | INFO  | It takes a moment until task 880a2810-8bd7-40d5-8d3b-07c2f497f8a7 (reboot) has been started and output is visible here.
2025-11-23 00:33:40.124327 | orchestrator |
2025-11-23 00:33:40.124409 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-23 00:33:40.124421 | orchestrator |
2025-11-23 00:33:40.124431 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-23 00:33:40.124440 | orchestrator | Sunday 23 November 2025 00:33:34 +0000 (0:00:00.168) 0:00:00.168 *******
2025-11-23 00:33:40.124449 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:33:40.124459 | orchestrator |
2025-11-23 00:33:40.124468 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-23 00:33:40.124477 | orchestrator | Sunday 23 November 2025 00:33:34 +0000 (0:00:00.096) 0:00:00.265 *******
2025-11-23 00:33:40.124485 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:33:40.124494 | orchestrator |
2025-11-23 00:33:40.124503 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-23 00:33:40.124511 | orchestrator | Sunday 23 November 2025 00:33:35 +0000 (0:00:00.853) 0:00:01.119 *******
2025-11-23 00:33:40.124520 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:33:40.124528 | orchestrator |
2025-11-23 00:33:40.124537 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-23 00:33:40.124545 | orchestrator |
2025-11-23 00:33:40.124554 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-23 00:33:40.124562 | orchestrator | Sunday 23 November 2025 00:33:35 +0000 (0:00:00.100) 0:00:01.219 *******
2025-11-23 00:33:40.124571 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:33:40.124579 | orchestrator |
2025-11-23 00:33:40.124588 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-23 00:33:40.124597 | orchestrator | Sunday 23 November 2025 00:33:35 +0000 (0:00:00.100) 0:00:01.319 *******
2025-11-23 00:33:40.124605 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:33:40.124614 | orchestrator |
2025-11-23 00:33:40.124623 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-23 00:33:40.124632 | orchestrator | Sunday 23 November 2025 00:33:36 +0000 (0:00:00.637) 0:00:01.957 *******
2025-11-23 00:33:40.124640 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:33:40.124649 | orchestrator |
2025-11-23 00:33:40.124657 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-23 00:33:40.124666 | orchestrator |
2025-11-23 00:33:40.124675 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-23 00:33:40.124685 | orchestrator | Sunday 23 November 2025 00:33:36 +0000 (0:00:00.098) 0:00:02.055 *******
2025-11-23 00:33:40.124696 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:33:40.124707 | orchestrator |
2025-11-23 00:33:40.124717 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-23 00:33:40.124753 | orchestrator | Sunday 23 November 2025 00:33:36 +0000 (0:00:00.140) 0:00:02.196 *******
2025-11-23 00:33:40.124764 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:33:40.124775 | orchestrator |
2025-11-23 00:33:40.124786 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-23 00:33:40.124796 | orchestrator | Sunday 23 November 2025 00:33:37 +0000 (0:00:00.640) 0:00:02.836 *******
2025-11-23 00:33:40.124807 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:33:40.124817 | orchestrator |
2025-11-23 00:33:40.124828 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-23 00:33:40.124838 | orchestrator |
2025-11-23 00:33:40.124849 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-23 00:33:40.124859 | orchestrator | Sunday 23 November 2025 00:33:37 +0000 (0:00:00.096) 0:00:02.932 *******
2025-11-23 00:33:40.124870 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:33:40.124880 | orchestrator |
2025-11-23 00:33:40.124891 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-23 00:33:40.124903 | orchestrator | Sunday 23 November 2025 00:33:37 +0000 (0:00:00.091) 0:00:03.024 *******
2025-11-23 00:33:40.124916 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:33:40.124927 | orchestrator |
2025-11-23 00:33:40.124940 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-23 00:33:40.124951 | orchestrator | Sunday 23 November 2025 00:33:38 +0000 (0:00:00.630) 0:00:03.655 *******
2025-11-23 00:33:40.124964 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:33:40.124976 | orchestrator |
2025-11-23 00:33:40.124988 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-23 00:33:40.125000 | orchestrator |
2025-11-23 00:33:40.125012 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-23 00:33:40.125024 | orchestrator | Sunday 23 November 2025 00:33:38 +0000 (0:00:00.120) 0:00:03.776 *******
2025-11-23 00:33:40.125036 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:33:40.125048 | orchestrator |
2025-11-23 00:33:40.125060 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-23 00:33:40.125072 | orchestrator | Sunday 23 November 2025 00:33:38 +0000 (0:00:00.084) 0:00:03.861 *******
2025-11-23 00:33:40.125084 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:33:40.125096 | orchestrator |
2025-11-23 00:33:40.125108 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-23 00:33:40.125120 | orchestrator | Sunday 23 November 2025 00:33:39 +0000 (0:00:00.632) 0:00:04.493 *******
2025-11-23 00:33:40.125132 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:33:40.125144 | orchestrator |
2025-11-23 00:33:40.125170 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-11-23 00:33:40.125183 | orchestrator |
2025-11-23 00:33:40.125195 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-11-23 00:33:40.125206 | orchestrator | Sunday 23 November 2025 00:33:39 +0000 (0:00:00.098) 0:00:04.591 *******
2025-11-23 00:33:40.125217 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:33:40.125227 | orchestrator |
2025-11-23 00:33:40.125238 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-11-23 00:33:40.125248 | orchestrator | Sunday 23 November 2025 00:33:39 +0000 (0:00:00.110) 0:00:04.701 *******
2025-11-23 00:33:40.125259 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:33:40.125269 | orchestrator |
2025-11-23 00:33:40.125280 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-11-23 00:33:40.125291 | orchestrator | Sunday 23 November 2025 00:33:39 +0000 (0:00:00.635) 0:00:05.337 *******
2025-11-23 00:33:40.125314 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:33:40.125325 | orchestrator |
2025-11-23 00:33:40.125336 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:33:40.125348 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:40.125400 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:40.125411 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:40.125422 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:40.125432 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:40.125443 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:33:40.125454 | orchestrator |
2025-11-23 00:33:40.125464 | orchestrator |
2025-11-23 00:33:40.125475 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:33:40.125486 | orchestrator | Sunday 23 November 2025 00:33:39 +0000 (0:00:00.033) 0:00:05.371 *******
2025-11-23 00:33:40.125497 | orchestrator | ===============================================================================
2025-11-23 00:33:40.125507 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.03s
2025-11-23 00:33:40.125518 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.62s
2025-11-23 00:33:40.125528 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s
2025-11-23 00:33:40.304845 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-11-23 00:33:52.223993 | orchestrator | 2025-11-23 00:33:52 | INFO  | Task 29c0a159-d3ae-4439-965e-0f595e95326b (wait-for-connection) was prepared for execution.
2025-11-23 00:33:52.224110 | orchestrator | 2025-11-23 00:33:52 | INFO  | It takes a moment until task 29c0a159-d3ae-4439-965e-0f595e95326b (wait-for-connection) has been started and output is visible here.
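The TASKS RECAP timing lines above (task name, a dash rule, then a duration) can be split with standard tools when profiling a run. The snippet below is a sketch of that parsing, not part of the job; the input line is copied from this log.

```shell
#!/bin/sh
# Sketch: split a TASKS RECAP timing line into task name and duration.
# The input line is copied verbatim from the reboot play's recap above.
line='Reboot system - do not wait for the reboot to complete ------------------ 4.03s'
# Strip the trailing run of dashes (2+) and the duration to get the name.
name=$(printf '%s\n' "$line" | sed 's/ -\{2,\} [0-9.]*s$//')
# Pull the trailing "<seconds>s" token as the duration.
secs=$(printf '%s\n' "$line" | grep -o '[0-9.]*s$')
echo "$name: $secs"
```

Requiring two or more dashes in the `sed` pattern keeps single hyphens inside task names (as in "do not wait") intact.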
2025-11-23 00:34:07.498514 | orchestrator | 2025-11-23 00:34:07.498596 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-11-23 00:34:07.498604 | orchestrator | 2025-11-23 00:34:07.498609 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-11-23 00:34:07.498615 | orchestrator | Sunday 23 November 2025 00:33:55 +0000 (0:00:00.171) 0:00:00.171 ******* 2025-11-23 00:34:07.498620 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:34:07.498626 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:34:07.498631 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:34:07.498636 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:34:07.498640 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:34:07.498645 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:34:07.498650 | orchestrator | 2025-11-23 00:34:07.498655 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:34:07.498660 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:34:07.498666 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:34:07.498671 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:34:07.498676 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:34:07.498680 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:34:07.498685 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:34:07.498707 | orchestrator | 2025-11-23 00:34:07.498712 | orchestrator | 2025-11-23 00:34:07.498717 | orchestrator | TASKS RECAP 
******************************************************************** 2025-11-23 00:34:07.498732 | orchestrator | Sunday 23 November 2025 00:34:07 +0000 (0:00:11.407) 0:00:11.578 ******* 2025-11-23 00:34:07.498737 | orchestrator | =============================================================================== 2025-11-23 00:34:07.498741 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.41s 2025-11-23 00:34:07.677582 | orchestrator | + osism apply hddtemp 2025-11-23 00:34:19.468204 | orchestrator | 2025-11-23 00:34:19 | INFO  | Task fb4aa18e-443a-4aec-9c57-dc15f99f2925 (hddtemp) was prepared for execution. 2025-11-23 00:34:19.468341 | orchestrator | 2025-11-23 00:34:19 | INFO  | It takes a moment until task fb4aa18e-443a-4aec-9c57-dc15f99f2925 (hddtemp) has been started and output is visible here. 2025-11-23 00:34:44.942190 | orchestrator | 2025-11-23 00:34:44.942292 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-11-23 00:34:44.942307 | orchestrator | 2025-11-23 00:34:44.942317 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-11-23 00:34:44.942328 | orchestrator | Sunday 23 November 2025 00:34:23 +0000 (0:00:00.223) 0:00:00.223 ******* 2025-11-23 00:34:44.942338 | orchestrator | ok: [testbed-manager] 2025-11-23 00:34:44.942412 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:34:44.942423 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:34:44.942433 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:34:44.942443 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:34:44.942453 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:34:44.942463 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:34:44.942473 | orchestrator | 2025-11-23 00:34:44.942483 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-11-23 00:34:44.942493 | orchestrator | Sunday 23 November 2025 
00:34:23 +0000 (0:00:00.606) 0:00:00.830 ******* 2025-11-23 00:34:44.942505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:34:44.942517 | orchestrator | 2025-11-23 00:34:44.942527 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-11-23 00:34:44.942537 | orchestrator | Sunday 23 November 2025 00:34:24 +0000 (0:00:01.017) 0:00:01.847 ******* 2025-11-23 00:34:44.942547 | orchestrator | ok: [testbed-manager] 2025-11-23 00:34:44.942557 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:34:44.942567 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:34:44.942576 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:34:44.942586 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:34:44.942595 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:34:44.942605 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:34:44.942615 | orchestrator | 2025-11-23 00:34:44.942624 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-11-23 00:34:44.942634 | orchestrator | Sunday 23 November 2025 00:34:26 +0000 (0:00:01.912) 0:00:03.760 ******* 2025-11-23 00:34:44.942644 | orchestrator | changed: [testbed-manager] 2025-11-23 00:34:44.942654 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:34:44.942664 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:34:44.942674 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:34:44.942684 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:34:44.942693 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:34:44.942703 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:34:44.942712 | orchestrator | 2025-11-23 00:34:44.942722 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-11-23 00:34:44.942732 | orchestrator | Sunday 23 November 2025 00:34:27 +0000 (0:00:01.123) 0:00:04.883 ******* 2025-11-23 00:34:44.942742 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:34:44.942751 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:34:44.942761 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:34:44.942793 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:34:44.942804 | orchestrator | ok: [testbed-manager] 2025-11-23 00:34:44.942813 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:34:44.942823 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:34:44.942832 | orchestrator | 2025-11-23 00:34:44.942842 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-11-23 00:34:44.942852 | orchestrator | Sunday 23 November 2025 00:34:28 +0000 (0:00:01.057) 0:00:05.941 ******* 2025-11-23 00:34:44.942861 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:34:44.942871 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:34:44.942880 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:34:44.942889 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:34:44.942899 | orchestrator | changed: [testbed-manager] 2025-11-23 00:34:44.942908 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:34:44.942918 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:34:44.942927 | orchestrator | 2025-11-23 00:34:44.942937 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-11-23 00:34:44.942946 | orchestrator | Sunday 23 November 2025 00:34:29 +0000 (0:00:00.575) 0:00:06.517 ******* 2025-11-23 00:34:44.942956 | orchestrator | changed: [testbed-manager] 2025-11-23 00:34:44.942965 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:34:44.942975 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:34:44.942984 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:34:44.942993 | orchestrator | changed: 
[testbed-node-2] 2025-11-23 00:34:44.943003 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:34:44.943013 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:34:44.943030 | orchestrator | 2025-11-23 00:34:44.943046 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-11-23 00:34:44.943063 | orchestrator | Sunday 23 November 2025 00:34:41 +0000 (0:00:12.444) 0:00:18.961 ******* 2025-11-23 00:34:44.943080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:34:44.943096 | orchestrator | 2025-11-23 00:34:44.943116 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-11-23 00:34:44.943138 | orchestrator | Sunday 23 November 2025 00:34:43 +0000 (0:00:01.035) 0:00:19.997 ******* 2025-11-23 00:34:44.943154 | orchestrator | changed: [testbed-manager] 2025-11-23 00:34:44.943169 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:34:44.943202 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:34:44.943218 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:34:44.943233 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:34:44.943250 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:34:44.943267 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:34:44.943284 | orchestrator | 2025-11-23 00:34:44.943301 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:34:44.943318 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:34:44.943383 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:34:44.943396 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:34:44.943406 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:34:44.943416 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:34:44.943425 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:34:44.943448 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:34:44.943461 | orchestrator | 2025-11-23 00:34:44.943480 | orchestrator | 2025-11-23 00:34:44.943498 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:34:44.943516 | orchestrator | Sunday 23 November 2025 00:34:44 +0000 (0:00:01.686) 0:00:21.684 ******* 2025-11-23 00:34:44.943544 | orchestrator | =============================================================================== 2025-11-23 00:34:44.943565 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.44s 2025-11-23 00:34:44.943585 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.91s 2025-11-23 00:34:44.943604 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.69s 2025-11-23 00:34:44.943623 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s 2025-11-23 00:34:44.943642 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.06s 2025-11-23 00:34:44.943659 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.04s 2025-11-23 00:34:44.943678 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.02s 2025-11-23 00:34:44.943697 | orchestrator | osism.services.hddtemp : Gather 
variables for each operating system ----- 0.61s 2025-11-23 00:34:44.943716 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.58s 2025-11-23 00:34:45.119106 | orchestrator | ++ semver latest 7.1.1 2025-11-23 00:34:45.174272 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-23 00:34:45.174435 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-23 00:34:45.174455 | orchestrator | + sudo systemctl restart manager.service 2025-11-23 00:35:22.014621 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-23 00:35:22.014731 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-11-23 00:35:22.014743 | orchestrator | + local max_attempts=60 2025-11-23 00:35:22.014752 | orchestrator | + local name=ceph-ansible 2025-11-23 00:35:22.014760 | orchestrator | + local attempt_num=1 2025-11-23 00:35:22.014769 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:35:22.049043 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:35:22.049126 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:35:22.049140 | orchestrator | + sleep 5 2025-11-23 00:35:27.052960 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:35:27.086505 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:35:27.086588 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:35:27.086603 | orchestrator | + sleep 5 2025-11-23 00:35:32.088781 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:35:32.111036 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:35:32.111132 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:35:32.111146 | orchestrator | + sleep 5 2025-11-23 00:35:37.115835 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:35:37.158067 | orchestrator | + [[ 
unhealthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:35:37.158166 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:35:37.158183 | orchestrator | + sleep 5 2025-11-23 00:35:42.162945 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:35:42.201041 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:35:42.201146 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:35:42.201163 | orchestrator | + sleep 5 2025-11-23 00:35:47.205783 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:35:47.245603 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:35:47.245662 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:35:47.245670 | orchestrator | + sleep 5 2025-11-23 00:35:52.250809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:35:52.289889 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:35:52.289994 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:35:52.290181 | orchestrator | + sleep 5 2025-11-23 00:35:57.293071 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:35:57.320500 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-23 00:35:57.320585 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:35:57.320599 | orchestrator | + sleep 5 2025-11-23 00:36:02.323581 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:36:02.350592 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-23 00:36:02.350703 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:36:02.350727 | orchestrator | + sleep 5 2025-11-23 00:36:07.353860 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:36:07.392464 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-11-23 00:36:07.392563 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:36:07.392578 | orchestrator | + sleep 5 2025-11-23 00:36:12.396978 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:36:12.421818 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-23 00:36:12.421903 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:36:12.421916 | orchestrator | + sleep 5 2025-11-23 00:36:17.425244 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:36:17.462649 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-23 00:36:17.462744 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:36:17.462759 | orchestrator | + sleep 5 2025-11-23 00:36:22.467623 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:36:22.505044 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-23 00:36:22.505145 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-23 00:36:22.505162 | orchestrator | + sleep 5 2025-11-23 00:36:27.509185 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-23 00:36:27.547626 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:36:27.547730 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-11-23 00:36:27.547747 | orchestrator | + local max_attempts=60 2025-11-23 00:36:27.547761 | orchestrator | + local name=kolla-ansible 2025-11-23 00:36:27.547773 | orchestrator | + local attempt_num=1 2025-11-23 00:36:27.549004 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-11-23 00:36:27.584635 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:36:27.584717 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-11-23 00:36:27.584729 | orchestrator | + local max_attempts=60 2025-11-23 
00:36:27.584741 | orchestrator | + local name=osism-ansible 2025-11-23 00:36:27.584751 | orchestrator | + local attempt_num=1 2025-11-23 00:36:27.585317 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-11-23 00:36:27.620604 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-23 00:36:27.620687 | orchestrator | + [[ true == \t\r\u\e ]] 2025-11-23 00:36:27.620702 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-23 00:36:27.774490 | orchestrator | ARA in ceph-ansible already disabled. 2025-11-23 00:36:27.920888 | orchestrator | ARA in kolla-ansible already disabled. 2025-11-23 00:36:28.222570 | orchestrator | ARA in osism-kubernetes already disabled. 2025-11-23 00:36:28.222703 | orchestrator | + osism apply gather-facts 2025-11-23 00:36:40.009418 | orchestrator | 2025-11-23 00:36:40 | INFO  | Task d04d2bd9-4f72-4917-b786-1b7d66db7e79 (gather-facts) was prepared for execution. 2025-11-23 00:36:40.009524 | orchestrator | 2025-11-23 00:36:40 | INFO  | It takes a moment until task d04d2bd9-4f72-4917-b786-1b7d66db7e79 (gather-facts) has been started and output is visible here. 
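The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until the ceph-ansible, kolla-ansible and osism-ansible containers report `healthy`. The real helper lives in the testbed configuration; this is a reconstructed sketch based only on the trace, with the poll interval made a parameter so the demo runs fast:

```shell
# Sketch of the health-wait loop traced above (reconstruction, not the
# actual testbed script). Argument order matches the trace:
# wait_for_container_healthy <max_attempts> <container_name> [interval]
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    interval="${3:-5}"   # the job polls every 5 s
    attempt_num=1
    while true; do
        status="$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)"
        [ "$status" = "healthy" ] && return 0
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep "$interval"
    done
}

# Demo with a stub `docker` that reports starting/starting/healthy,
# mimicking the status progression seen in the log.
count_file="$(mktemp)"; echo 0 > "$count_file"
docker() {
    n=$(( $(cat "$count_file") + 1 ))
    echo "$n" > "$count_file"
    [ "$n" -ge 3 ] && echo healthy || echo starting
}
wait_for_container_healthy 10 ceph-ansible 0 && echo "ceph-ansible is healthy"
```

With 60 attempts at 5-second intervals, the gate in the job tolerates roughly five minutes of `unhealthy`/`starting` before giving up; the log shows ceph-ansible turning healthy after about a minute.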
2025-11-23 00:36:52.703092 | orchestrator | 2025-11-23 00:36:52.703178 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-23 00:36:52.703190 | orchestrator | 2025-11-23 00:36:52.703197 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-23 00:36:52.703205 | orchestrator | Sunday 23 November 2025 00:36:43 +0000 (0:00:00.193) 0:00:00.193 ******* 2025-11-23 00:36:52.703212 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:36:52.703219 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:36:52.703227 | orchestrator | ok: [testbed-manager] 2025-11-23 00:36:52.703233 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:36:52.703242 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:36:52.703278 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:36:52.703291 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:36:52.703302 | orchestrator | 2025-11-23 00:36:52.703311 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-23 00:36:52.703318 | orchestrator | 2025-11-23 00:36:52.703325 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-23 00:36:52.703331 | orchestrator | Sunday 23 November 2025 00:36:51 +0000 (0:00:08.231) 0:00:08.424 ******* 2025-11-23 00:36:52.703337 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:36:52.703346 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:36:52.703357 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:36:52.703426 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:36:52.703433 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:36:52.703444 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:36:52.703454 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:36:52.703465 | orchestrator | 2025-11-23 00:36:52.703475 | orchestrator | PLAY RECAP 
********************************************************************* 2025-11-23 00:36:52.703485 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:36:52.703496 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:36:52.703506 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:36:52.703515 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:36:52.703525 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:36:52.703534 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:36:52.703544 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:36:52.703555 | orchestrator | 2025-11-23 00:36:52.703565 | orchestrator | 2025-11-23 00:36:52.703576 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:36:52.703586 | orchestrator | Sunday 23 November 2025 00:36:52 +0000 (0:00:00.499) 0:00:08.924 ******* 2025-11-23 00:36:52.703614 | orchestrator | =============================================================================== 2025-11-23 00:36:52.703631 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.23s 2025-11-23 00:36:52.703642 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-11-23 00:36:52.919570 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-11-23 00:36:52.938943 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-11-23 00:36:52.953928 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-11-23 00:36:52.968938 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-11-23 00:36:52.982625 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-11-23 00:36:52.996020 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-11-23 00:36:53.016542 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-11-23 00:36:53.033808 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-11-23 00:36:53.046699 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-11-23 00:36:53.056391 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-11-23 00:36:53.064707 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-11-23 00:36:53.077939 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-11-23 00:36:53.088979 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-11-23 00:36:53.106770 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-11-23 00:36:53.115561 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-11-23 00:36:53.126413 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-11-23 00:36:53.134743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-11-23 00:36:53.144409 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-11-23 00:36:53.152453 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-11-23 00:36:53.164146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-11-23 00:36:53.174234 | orchestrator | + [[ false == \t\r\u\e ]] 2025-11-23 00:36:53.512878 | orchestrator | ok: Runtime: 0:23:10.503007 2025-11-23 00:36:53.600374 | 2025-11-23 00:36:53.600513 | TASK [Deploy services] 2025-11-23 00:36:54.132896 | orchestrator | skipping: Conditional result was False 2025-11-23 00:36:54.142975 | 2025-11-23 00:36:54.143166 | TASK [Deploy in a nutshell] 2025-11-23 00:36:54.855307 | orchestrator | + set -e 2025-11-23 00:36:54.855470 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-23 00:36:54.855494 | orchestrator | ++ export INTERACTIVE=false 2025-11-23 00:36:54.855503 | orchestrator | ++ INTERACTIVE=false 2025-11-23 00:36:54.855509 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-23 00:36:54.855513 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-23 00:36:54.855528 | orchestrator | + source /opt/manager-vars.sh 2025-11-23 00:36:54.855548 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-23 00:36:54.855560 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-23 00:36:54.855566 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-23 00:36:54.855572 | orchestrator | ++ CEPH_VERSION=reef 2025-11-23 00:36:54.855576 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-23 00:36:54.855584 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-11-23 00:36:54.855588 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-23 00:36:54.855596 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-23 00:36:54.855600 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-23 00:36:54.855606 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-23 00:36:54.855610 | orchestrator | ++ export ARA=false 2025-11-23 00:36:54.855614 | orchestrator | ++ ARA=false 2025-11-23 00:36:54.855673 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-23 00:36:54.855678 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-23 00:36:54.855682 | orchestrator | ++ export TEMPEST=true 2025-11-23 00:36:54.855686 | orchestrator | ++ TEMPEST=true 2025-11-23 00:36:54.855690 | orchestrator | ++ export IS_ZUUL=true 2025-11-23 00:36:54.855694 | orchestrator | ++ IS_ZUUL=true 2025-11-23 00:36:54.855698 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.118 2025-11-23 00:36:54.855702 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.118 2025-11-23 00:36:54.855706 | orchestrator | ++ export EXTERNAL_API=false 2025-11-23 00:36:54.855709 | orchestrator | ++ EXTERNAL_API=false 2025-11-23 00:36:54.855713 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-23 00:36:54.855717 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-23 00:36:54.855721 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-23 00:36:54.855725 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-23 00:36:54.855731 | orchestrator | 2025-11-23 00:36:54.855736 | orchestrator | # PULL IMAGES 2025-11-23 00:36:54.855740 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-23 00:36:54.855748 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-23 00:36:54.855752 | orchestrator | + echo 2025-11-23 00:36:54.855756 | orchestrator | + echo '# PULL IMAGES' 2025-11-23 00:36:54.855884 | orchestrator | 2025-11-23 00:36:54.855891 | orchestrator | + echo 2025-11-23 00:36:54.857433 | orchestrator | ++ semver latest 7.0.0 2025-11-23 
00:36:54.914253 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-23 00:36:54.914306 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-23 00:36:54.914314 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-11-23 00:36:56.622913 | orchestrator | 2025-11-23 00:36:56 | INFO  | Trying to run play pull-images in environment custom 2025-11-23 00:37:06.882188 | orchestrator | 2025-11-23 00:37:06 | INFO  | Task e5dbae5a-20f1-4e6d-928d-a0d1d3a1fdcd (pull-images) was prepared for execution. 2025-11-23 00:37:06.882310 | orchestrator | 2025-11-23 00:37:06 | INFO  | Task e5dbae5a-20f1-4e6d-928d-a0d1d3a1fdcd is running in background. No more output. Check ARA for logs. 2025-11-23 00:37:08.828259 | orchestrator | 2025-11-23 00:37:08 | INFO  | Trying to run play wipe-partitions in environment custom 2025-11-23 00:37:18.978699 | orchestrator | 2025-11-23 00:37:18 | INFO  | Task d183488a-dcf4-484a-bba3-e1a198f618e1 (wipe-partitions) was prepared for execution. 2025-11-23 00:37:18.978832 | orchestrator | 2025-11-23 00:37:18 | INFO  | It takes a moment until task d183488a-dcf4-484a-bba3-e1a198f618e1 (wipe-partitions) has been started and output is visible here. 
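The `++ semver latest 7.0.0` probe in the trace above returns -1 (the tag `latest` is not a semantic version), so the script falls through to the `[[ latest == latest ]]` branch and treats the manager as new enough. That gating pattern can be sketched as follows; `version_at_least` is a hypothetical name, and `semver` stands for the comparison CLI the job invokes:

```shell
# Sketch of the version gate seen in the trace: compare a version tag
# against a threshold, special-casing the non-semver tag "latest".
version_at_least() {
    version="$1"
    threshold="$2"
    if [ "$version" = "latest" ]; then
        return 0   # mirrors the [[ latest == latest ]] branch in the log
    fi
    # semver prints -1/0/1 for less/equal/greater (assumption about the CLI)
    [ "$(semver "$version" "$threshold")" -ge 0 ]
}

version_at_least latest 7.0.0 && echo "manager image is new enough"
```

The job uses this to decide whether newer-manager code paths (such as `osism apply --no-wait -r 2 -e custom pull-images`) are available.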
2025-11-23 00:37:30.599122 | orchestrator |
2025-11-23 00:37:30.599296 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-11-23 00:37:30.599375 | orchestrator |
2025-11-23 00:37:30.599427 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-11-23 00:37:30.599456 | orchestrator | Sunday 23 November 2025 00:37:22 +0000 (0:00:00.130) 0:00:00.130 *******
2025-11-23 00:37:30.599477 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:37:30.599498 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:37:30.599516 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:37:30.599535 | orchestrator |
2025-11-23 00:37:30.599554 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-11-23 00:37:30.599608 | orchestrator | Sunday 23 November 2025 00:37:23 +0000 (0:00:00.524) 0:00:00.654 *******
2025-11-23 00:37:30.599627 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:37:30.599646 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:37:30.599672 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:37:30.599691 | orchestrator |
2025-11-23 00:37:30.599709 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-11-23 00:37:30.599728 | orchestrator | Sunday 23 November 2025 00:37:23 +0000 (0:00:00.314) 0:00:00.968 *******
2025-11-23 00:37:30.599747 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:37:30.599767 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:37:30.599782 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:37:30.599794 | orchestrator |
2025-11-23 00:37:30.599805 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-11-23 00:37:30.599816 | orchestrator | Sunday 23 November 2025 00:37:24 +0000 (0:00:00.520) 0:00:01.489 *******
2025-11-23 00:37:30.599827 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:37:30.599838 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:37:30.599848 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:37:30.599859 | orchestrator |
2025-11-23 00:37:30.599870 | orchestrator | TASK [Check device availability] ***********************************************
2025-11-23 00:37:30.599881 | orchestrator | Sunday 23 November 2025 00:37:24 +0000 (0:00:00.238) 0:00:01.728 *******
2025-11-23 00:37:30.599892 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-11-23 00:37:30.599908 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-11-23 00:37:30.599919 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-11-23 00:37:30.599929 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-11-23 00:37:30.599940 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-11-23 00:37:30.599951 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-11-23 00:37:30.599962 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-11-23 00:37:30.599975 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-11-23 00:37:30.599988 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-11-23 00:37:30.600000 | orchestrator |
2025-11-23 00:37:30.600012 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-11-23 00:37:30.600026 | orchestrator | Sunday 23 November 2025 00:37:25 +0000 (0:00:01.139) 0:00:02.867 *******
2025-11-23 00:37:30.600039 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-11-23 00:37:30.600051 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-11-23 00:37:30.600064 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-11-23 00:37:30.600076 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-11-23 00:37:30.600088 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-11-23 00:37:30.600100 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-11-23 00:37:30.600115 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-11-23 00:37:30.600127 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-11-23 00:37:30.600139 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-11-23 00:37:30.600151 | orchestrator |
2025-11-23 00:37:30.600163 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-11-23 00:37:30.600175 | orchestrator | Sunday 23 November 2025 00:37:27 +0000 (0:00:01.402) 0:00:04.269 *******
2025-11-23 00:37:30.600187 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-11-23 00:37:30.600200 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-11-23 00:37:30.600212 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-11-23 00:37:30.600224 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-11-23 00:37:30.600236 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-11-23 00:37:30.600257 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-11-23 00:37:30.600270 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-11-23 00:37:30.600292 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-11-23 00:37:30.600303 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-11-23 00:37:30.600314 | orchestrator |
2025-11-23 00:37:30.600325 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-11-23 00:37:30.600336 | orchestrator | Sunday 23 November 2025 00:37:29 +0000 (0:00:02.051) 0:00:06.321 *******
2025-11-23 00:37:30.600346 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:37:30.600357 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:37:30.600368 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:37:30.600378 | orchestrator |
2025-11-23 00:37:30.600533 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-11-23 00:37:30.600582 | orchestrator | Sunday 23 November 2025 00:37:29 +0000 (0:00:00.581) 0:00:06.902 *******
2025-11-23 00:37:30.600609 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:37:30.600620 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:37:30.600631 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:37:30.600642 | orchestrator |
2025-11-23 00:37:30.600653 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:37:30.600667 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:30.600680 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:30.600716 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:30.600728 | orchestrator |
2025-11-23 00:37:30.600739 | orchestrator |
2025-11-23 00:37:30.600750 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:37:30.600761 | orchestrator | Sunday 23 November 2025 00:37:30 +0000 (0:00:00.616) 0:00:07.518 *******
2025-11-23 00:37:30.600771 | orchestrator | ===============================================================================
2025-11-23 00:37:30.600782 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.05s
2025-11-23 00:37:30.600793 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s
2025-11-23 00:37:30.600803 | orchestrator | Check device availability ----------------------------------------------- 1.14s
2025-11-23 00:37:30.600814 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s
2025-11-23 00:37:30.600824 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s
2025-11-23 00:37:30.600835 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.52s
2025-11-23 00:37:30.600845 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.52s
2025-11-23 00:37:30.600856 | orchestrator | Remove all rook related logical devices --------------------------------- 0.31s
2025-11-23 00:37:30.600867 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2025-11-23 00:37:42.529629 | orchestrator | 2025-11-23 00:37:42 | INFO  | Task 48c969c4-0f0d-46cd-a9a0-9d243a1f471e (facts) was prepared for execution.
2025-11-23 00:37:42.529772 | orchestrator | 2025-11-23 00:37:42 | INFO  | It takes a moment until task 48c969c4-0f0d-46cd-a9a0-9d243a1f471e (facts) has been started and output is visible here.
2025-11-23 00:37:53.565356 | orchestrator |
2025-11-23 00:37:53.565495 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-11-23 00:37:53.565512 | orchestrator |
2025-11-23 00:37:53.565523 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-11-23 00:37:53.565534 | orchestrator | Sunday 23 November 2025 00:37:46 +0000 (0:00:00.194) 0:00:00.194 *******
2025-11-23 00:37:53.565544 | orchestrator | ok: [testbed-manager]
2025-11-23 00:37:53.565555 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:37:53.565565 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:37:53.565598 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:37:53.565609 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:37:53.565618 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:37:53.565627 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:37:53.565637 | orchestrator |
2025-11-23 00:37:53.565649 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-11-23 00:37:53.565659 | orchestrator | Sunday 23 November 2025 00:37:47 +0000 (0:00:00.948) 0:00:01.143 *******
2025-11-23 00:37:53.565668 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:37:53.565679 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:37:53.565688 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:37:53.565697 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:37:53.565707 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:37:53.565716 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:37:53.565725 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:37:53.565735 | orchestrator |
2025-11-23 00:37:53.565744 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-23 00:37:53.565754 | orchestrator |
2025-11-23 00:37:53.565763 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-23 00:37:53.565773 | orchestrator | Sunday 23 November 2025 00:37:48 +0000 (0:00:01.049) 0:00:02.192 *******
2025-11-23 00:37:53.565782 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:37:53.565792 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:37:53.565802 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:37:53.565812 | orchestrator | ok: [testbed-manager]
2025-11-23 00:37:53.565821 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:37:53.565830 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:37:53.565840 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:37:53.565849 | orchestrator |
2025-11-23 00:37:53.565859 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-11-23 00:37:53.565868 | orchestrator |
2025-11-23 00:37:53.565877 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-11-23 00:37:53.565902 | orchestrator | Sunday 23 November 2025 00:37:52 +0000 (0:00:04.663) 0:00:06.856 *******
2025-11-23 00:37:53.565914 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:37:53.565925 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:37:53.565936 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:37:53.565946 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:37:53.565957 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:37:53.565968 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:37:53.565979 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:37:53.565990 | orchestrator |
2025-11-23 00:37:53.566000 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:37:53.566012 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:53.566086 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:53.566098 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:53.566108 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:53.566119 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:53.566130 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:53.566141 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:37:53.566151 | orchestrator |
2025-11-23 00:37:53.566170 | orchestrator |
2025-11-23 00:37:53.566181 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:37:53.566192 | orchestrator | Sunday 23 November 2025 00:37:53 +0000 (0:00:00.474) 0:00:07.331 *******
2025-11-23 00:37:53.566203 | orchestrator | ===============================================================================
2025-11-23 00:37:53.566214 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.66s
2025-11-23 00:37:53.566224 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s
2025-11-23 00:37:53.566235 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.95s
2025-11-23 00:37:53.566246 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s
2025-11-23 00:37:55.471655 | orchestrator | 2025-11-23 00:37:55 | INFO  | Task d0077284-39cf-47bf-b545-3bdf1f1a572d (ceph-configure-lvm-volumes) was prepared for execution.
2025-11-23 00:37:55.471768 | orchestrator | 2025-11-23 00:37:55 | INFO  | It takes a moment until task d0077284-39cf-47bf-b545-3bdf1f1a572d (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-11-23 00:38:05.562610 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2025-11-23 00:38:05.562728 | orchestrator | 2.16.14
2025-11-23 00:38:05.562746 | orchestrator |
2025-11-23 00:38:05.562759 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-11-23 00:38:05.562772 | orchestrator |
2025-11-23 00:38:05.562785 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-23 00:38:05.562798 | orchestrator | Sunday 23 November 2025 00:37:59 +0000 (0:00:00.282) 0:00:00.282 *******
2025-11-23 00:38:05.562809 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-23 00:38:05.562820 | orchestrator |
2025-11-23 00:38:05.562832 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-23 00:38:05.562843 | orchestrator | Sunday 23 November 2025 00:37:59 +0000 (0:00:00.223) 0:00:00.506 *******
2025-11-23 00:38:05.562854 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:38:05.562865 | orchestrator |
2025-11-23 00:38:05.562876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.562886 | orchestrator | Sunday 23 November 2025 00:37:59 +0000 (0:00:00.205) 0:00:00.712 *******
2025-11-23 00:38:05.562898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-11-23 00:38:05.562909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-11-23 00:38:05.562920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-11-23 00:38:05.562931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-11-23 00:38:05.562942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-11-23 00:38:05.562953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-11-23 00:38:05.562964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-11-23 00:38:05.562975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-11-23 00:38:05.562986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-11-23 00:38:05.562997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-11-23 00:38:05.563017 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-11-23 00:38:05.563029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-11-23 00:38:05.563040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-11-23 00:38:05.563051 | orchestrator |
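For readers reproducing the `Wipe partitions` play above by hand: its per-device tasks (wipefs, zeroing the first 32M, then a udev refresh) boil down to a short shell sequence. This is a minimal sketch, not the play's actual code; `wipe_device` is a hypothetical helper, and the commands assume GNU coreutils and util-linux:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper mirroring the play's per-device tasks.
wipe_device() {
  local dev="$1"
  # "Wipe partitions with wipefs": erase filesystem/RAID/partition-table signatures.
  wipefs -a "$dev"
  # "Overwrite first 32M with zeros": clobber leftover metadata (e.g. LVM/Ceph labels).
  dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none
}

# "Reload udev rules" / "Request device events from the kernel" would follow:
#   udevadm control --reload && udevadm trigger
```

This is destructive on real devices; in the play it is applied only to the OSD data disks (/dev/sdb, /dev/sdc, /dev/sdd).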
2025-11-23 00:38:05.563063 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563099 | orchestrator | Sunday 23 November 2025 00:38:00 +0000 (0:00:00.391) 0:00:01.103 *******
2025-11-23 00:38:05.563112 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.563124 | orchestrator |
2025-11-23 00:38:05.563137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563150 | orchestrator | Sunday 23 November 2025 00:38:00 +0000 (0:00:00.173) 0:00:01.276 *******
2025-11-23 00:38:05.563162 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.563175 | orchestrator |
2025-11-23 00:38:05.563187 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563200 | orchestrator | Sunday 23 November 2025 00:38:00 +0000 (0:00:00.214) 0:00:01.491 *******
2025-11-23 00:38:05.563212 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.563225 | orchestrator |
2025-11-23 00:38:05.563238 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563255 | orchestrator | Sunday 23 November 2025 00:38:00 +0000 (0:00:00.186) 0:00:01.678 *******
2025-11-23 00:38:05.563267 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.563281 | orchestrator |
2025-11-23 00:38:05.563292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563303 | orchestrator | Sunday 23 November 2025 00:38:00 +0000 (0:00:00.182) 0:00:01.860 *******
2025-11-23 00:38:05.563314 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.563325 | orchestrator |
2025-11-23 00:38:05.563336 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563347 | orchestrator | Sunday 23 November 2025 00:38:01 +0000 (0:00:00.207) 0:00:02.067 *******
2025-11-23 00:38:05.563358 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.563369 | orchestrator |
2025-11-23 00:38:05.563380 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563391 | orchestrator | Sunday 23 November 2025 00:38:01 +0000 (0:00:00.181) 0:00:02.249 *******
2025-11-23 00:38:05.563402 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.563413 | orchestrator |
2025-11-23 00:38:05.563469 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563480 | orchestrator | Sunday 23 November 2025 00:38:01 +0000 (0:00:00.191) 0:00:02.440 *******
2025-11-23 00:38:05.563491 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.563501 | orchestrator |
2025-11-23 00:38:05.563512 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563523 | orchestrator | Sunday 23 November 2025 00:38:01 +0000 (0:00:00.170) 0:00:02.611 *******
2025-11-23 00:38:05.563534 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1)
2025-11-23 00:38:05.563546 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1)
2025-11-23 00:38:05.563556 | orchestrator |
2025-11-23 00:38:05.563567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563596 | orchestrator | Sunday 23 November 2025 00:38:02 +0000 (0:00:00.368) 0:00:02.980 *******
2025-11-23 00:38:05.563608 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0)
2025-11-23 00:38:05.563619 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0)
2025-11-23 00:38:05.563629 | orchestrator |
2025-11-23 00:38:05.563640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563651 | orchestrator | Sunday 23 November 2025 00:38:02 +0000 (0:00:00.489) 0:00:03.469 *******
2025-11-23 00:38:05.563661 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356)
2025-11-23 00:38:05.563672 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356)
2025-11-23 00:38:05.563683 | orchestrator |
2025-11-23 00:38:05.563693 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563713 | orchestrator | Sunday 23 November 2025 00:38:03 +0000 (0:00:00.499) 0:00:03.969 *******
2025-11-23 00:38:05.563724 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f)
2025-11-23 00:38:05.563734 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f)
2025-11-23 00:38:05.563745 | orchestrator |
2025-11-23 00:38:05.563756 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:05.563766 | orchestrator | Sunday 23 November 2025 00:38:03 +0000 (0:00:00.622) 0:00:04.592 *******
2025-11-23 00:38:05.563777 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-11-23 00:38:05.563788 | orchestrator |
2025-11-23 00:38:05.563804 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:05.563815 | orchestrator | Sunday 23 November 2025 00:38:03 +0000 (0:00:00.307) 0:00:04.899 *******
2025-11-23 00:38:05.563825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-11-23 00:38:05.563836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-11-23 00:38:05.563847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-11-23 00:38:05.563857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-11-23 00:38:05.563868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-11-23 00:38:05.563879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-11-23 00:38:05.563889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-11-23 00:38:05.563900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-11-23 00:38:05.563910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-11-23 00:38:05.563921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-11-23 00:38:05.563931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-11-23 00:38:05.563941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-11-23 00:38:05.563952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-11-23 00:38:05.563963 | orchestrator |
2025-11-23 00:38:05.563974 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:05.563984 | orchestrator | Sunday 23 November 2025 00:38:04 +0000 (0:00:00.320) 0:00:05.220 *******
2025-11-23 00:38:05.563995 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.564005 | orchestrator |
2025-11-23 00:38:05.564016 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:05.564027 | orchestrator | Sunday 23 November 2025 00:38:04 +0000 (0:00:00.187) 0:00:05.408 *******
2025-11-23 00:38:05.564037 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.564048 | orchestrator |
2025-11-23 00:38:05.564058 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:05.564069 | orchestrator | Sunday 23 November 2025 00:38:04 +0000 (0:00:00.177) 0:00:05.585 *******
2025-11-23 00:38:05.564079 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.564090 | orchestrator |
2025-11-23 00:38:05.564101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:05.564111 | orchestrator | Sunday 23 November 2025 00:38:04 +0000 (0:00:00.172) 0:00:05.758 *******
2025-11-23 00:38:05.564122 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.564132 | orchestrator |
2025-11-23 00:38:05.564143 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:05.564154 | orchestrator | Sunday 23 November 2025 00:38:05 +0000 (0:00:00.177) 0:00:05.935 *******
2025-11-23 00:38:05.564170 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.564181 | orchestrator |
2025-11-23 00:38:05.564192 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:05.564202 | orchestrator | Sunday 23 November 2025 00:38:05 +0000 (0:00:00.192) 0:00:06.128 *******
2025-11-23 00:38:05.564213 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.564223 | orchestrator |
2025-11-23 00:38:05.564234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:05.564245 | orchestrator | Sunday 23 November 2025 00:38:05 +0000 (0:00:00.170) 0:00:06.299 *******
2025-11-23 00:38:05.564255 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:05.564266 | orchestrator |
2025-11-23 00:38:05.564282 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:12.080244 | orchestrator | Sunday 23 November 2025 00:38:05 +0000 (0:00:00.180) 0:00:06.479 *******
2025-11-23 00:38:12.080355 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.080374 | orchestrator |
2025-11-23 00:38:12.080386 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:12.080398 | orchestrator | Sunday 23 November 2025 00:38:05 +0000 (0:00:00.179) 0:00:06.658 *******
2025-11-23 00:38:12.080409 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-11-23 00:38:12.080481 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-11-23 00:38:12.080505 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-11-23 00:38:12.080523 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-11-23 00:38:12.080534 | orchestrator |
2025-11-23 00:38:12.080546 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:12.080557 | orchestrator | Sunday 23 November 2025 00:38:06 +0000 (0:00:00.823) 0:00:07.482 *******
2025-11-23 00:38:12.080569 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.080580 | orchestrator |
2025-11-23 00:38:12.080591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:12.080602 | orchestrator | Sunday 23 November 2025 00:38:06 +0000 (0:00:00.185) 0:00:07.667 *******
2025-11-23 00:38:12.080613 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.080624 | orchestrator |
2025-11-23 00:38:12.080635 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:12.080646 | orchestrator | Sunday 23 November 2025 00:38:06 +0000 (0:00:00.188) 0:00:07.855 *******
2025-11-23 00:38:12.080657 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.080667 | orchestrator |
2025-11-23 00:38:12.080678 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:12.080689 | orchestrator | Sunday 23 November 2025 00:38:07 +0000 (0:00:00.173) 0:00:08.029 *******
2025-11-23 00:38:12.080700 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.080711 | orchestrator |
2025-11-23 00:38:12.080722 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-11-23 00:38:12.080733 | orchestrator | Sunday 23 November 2025 00:38:07 +0000 (0:00:00.179) 0:00:08.209 *******
2025-11-23 00:38:12.080744 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-11-23 00:38:12.080755 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-11-23 00:38:12.080766 | orchestrator |
2025-11-23 00:38:12.080798 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-11-23 00:38:12.080811 | orchestrator | Sunday 23 November 2025 00:38:07 +0000 (0:00:00.155) 0:00:08.364 *******
2025-11-23 00:38:12.080824 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.080836 | orchestrator |
2025-11-23 00:38:12.080849 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-11-23 00:38:12.080861 | orchestrator | Sunday 23 November 2025 00:38:07 +0000 (0:00:00.117) 0:00:08.481 *******
2025-11-23 00:38:12.080874 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.080886 | orchestrator |
2025-11-23 00:38:12.080899 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-11-23 00:38:12.080935 | orchestrator | Sunday 23 November 2025 00:38:07 +0000 (0:00:00.115) 0:00:08.597 *******
2025-11-23 00:38:12.080947 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.080960 | orchestrator |
2025-11-23 00:38:12.080972 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-11-23 00:38:12.080983 | orchestrator | Sunday 23 November 2025 00:38:07 +0000 (0:00:00.125) 0:00:08.723 *******
2025-11-23 00:38:12.080994 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:38:12.081005 | orchestrator |
2025-11-23 00:38:12.081016 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-11-23 00:38:12.081027 | orchestrator | Sunday 23 November 2025 00:38:07 +0000 (0:00:00.126) 0:00:08.850 *******
2025-11-23 00:38:12.081038 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}})
2025-11-23 00:38:12.081050 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '939e3465-cd43-5a63-a3e3-1280596736df'}})
2025-11-23 00:38:12.081061 | orchestrator |
2025-11-23 00:38:12.081072 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-11-23 00:38:12.081083 | orchestrator | Sunday 23 November 2025 00:38:08 +0000 (0:00:00.148) 0:00:08.998 *******
2025-11-23 00:38:12.081095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}})
2025-11-23 00:38:12.081113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '939e3465-cd43-5a63-a3e3-1280596736df'}})
2025-11-23 00:38:12.081125 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081136 | orchestrator |
2025-11-23 00:38:12.081146 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-11-23 00:38:12.081157 | orchestrator | Sunday 23 November 2025 00:38:08 +0000 (0:00:00.126) 0:00:09.125 *******
2025-11-23 00:38:12.081168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}})
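The "Generate lvm_volumes structure (block only)" task above pairs each OSD device's pre-generated UUID with VG/LV names following the pattern `osd-block-<uuid>` / `ceph-<uuid>`, which is what later shows up as the `data`/`data_vg` values in the printed configuration data. A minimal sketch of that mapping (`build_lvm_volumes` is a hypothetical helper name, not the play's actual code):

```shell
# Reads "device uuid" pairs on stdin and prints the derived LV/VG names,
# following the naming visible in the printed configuration data.
build_lvm_volumes() {
  local dev uuid
  while read -r dev uuid; do
    printf '%s: data=osd-block-%s data_vg=ceph-%s\n' "$dev" "$uuid" "$uuid"
  done
}

# e.g.: printf 'sdb b63f9958-8ac2-53b3-b8b4-a449f25b1af6\n' | build_lvm_volumes
```

Keeping the UUID stable per device means the derived VG/LV names are idempotent across reruns of the play.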
2025-11-23 00:38:12.081179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '939e3465-cd43-5a63-a3e3-1280596736df'}})
2025-11-23 00:38:12.081190 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081201 | orchestrator |
2025-11-23 00:38:12.081212 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-11-23 00:38:12.081223 | orchestrator | Sunday 23 November 2025 00:38:08 +0000 (0:00:00.246) 0:00:09.372 *******
2025-11-23 00:38:12.081234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}})
2025-11-23 00:38:12.081262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '939e3465-cd43-5a63-a3e3-1280596736df'}})
2025-11-23 00:38:12.081274 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081285 | orchestrator |
2025-11-23 00:38:12.081296 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-11-23 00:38:12.081312 | orchestrator | Sunday 23 November 2025 00:38:08 +0000 (0:00:00.141) 0:00:09.514 *******
2025-11-23 00:38:12.081323 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:38:12.081334 | orchestrator |
2025-11-23 00:38:12.081345 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-11-23 00:38:12.081355 | orchestrator | Sunday 23 November 2025 00:38:08 +0000 (0:00:00.129) 0:00:09.643 *******
2025-11-23 00:38:12.081366 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:38:12.081377 | orchestrator |
2025-11-23 00:38:12.081387 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-11-23 00:38:12.081398 | orchestrator | Sunday 23 November 2025 00:38:08 +0000 (0:00:00.132) 0:00:09.776 *******
2025-11-23 00:38:12.081408 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081443 | orchestrator |
2025-11-23 00:38:12.081456 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-11-23 00:38:12.081467 | orchestrator | Sunday 23 November 2025 00:38:08 +0000 (0:00:00.122) 0:00:09.898 *******
2025-11-23 00:38:12.081486 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081497 | orchestrator |
2025-11-23 00:38:12.081520 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-11-23 00:38:12.081531 | orchestrator | Sunday 23 November 2025 00:38:09 +0000 (0:00:00.122) 0:00:10.021 *******
2025-11-23 00:38:12.081542 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081553 | orchestrator |
2025-11-23 00:38:12.081564 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-11-23 00:38:12.081574 | orchestrator | Sunday 23 November 2025 00:38:09 +0000 (0:00:00.112) 0:00:10.133 *******
2025-11-23 00:38:12.081585 | orchestrator | ok: [testbed-node-3] => {
2025-11-23 00:38:12.081596 | orchestrator |  "ceph_osd_devices": {
2025-11-23 00:38:12.081607 | orchestrator |  "sdb": {
2025-11-23 00:38:12.081619 | orchestrator |  "osd_lvm_uuid": "b63f9958-8ac2-53b3-b8b4-a449f25b1af6"
2025-11-23 00:38:12.081630 | orchestrator |  },
2025-11-23 00:38:12.081641 | orchestrator |  "sdc": {
2025-11-23 00:38:12.081652 | orchestrator |  "osd_lvm_uuid": "939e3465-cd43-5a63-a3e3-1280596736df"
2025-11-23 00:38:12.081663 | orchestrator |  }
2025-11-23 00:38:12.081674 | orchestrator |  }
2025-11-23 00:38:12.081685 | orchestrator | }
2025-11-23 00:38:12.081696 | orchestrator |
2025-11-23 00:38:12.081707 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-11-23 00:38:12.081717 | orchestrator | Sunday 23 November 2025 00:38:09 +0000 (0:00:00.137) 0:00:10.271 *******
2025-11-23 00:38:12.081728 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081739 | orchestrator |
2025-11-23 00:38:12.081750 | orchestrator | TASK [Print DB devices] ********************************************************
2025-11-23 00:38:12.081761 | orchestrator | Sunday 23 November 2025 00:38:09 +0000 (0:00:00.116) 0:00:10.387 *******
2025-11-23 00:38:12.081772 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081782 | orchestrator |
2025-11-23 00:38:12.081793 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-11-23 00:38:12.081804 | orchestrator | Sunday 23 November 2025 00:38:09 +0000 (0:00:00.127) 0:00:10.515 *******
2025-11-23 00:38:12.081814 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:38:12.081825 | orchestrator |
2025-11-23 00:38:12.081836 | orchestrator | TASK [Print configuration data] ************************************************
2025-11-23 00:38:12.081847 | orchestrator | Sunday 23 November 2025 00:38:09 +0000 (0:00:00.123) 0:00:10.638 *******
2025-11-23 00:38:12.081857 | orchestrator | changed: [testbed-node-3] => {
2025-11-23 00:38:12.081868 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-11-23 00:38:12.081879 | orchestrator |  "ceph_osd_devices": {
2025-11-23 00:38:12.081890 | orchestrator |  "sdb": {
2025-11-23 00:38:12.081901 | orchestrator |  "osd_lvm_uuid": "b63f9958-8ac2-53b3-b8b4-a449f25b1af6"
2025-11-23 00:38:12.081912 | orchestrator |  },
2025-11-23 00:38:12.081923 | orchestrator |  "sdc": {
2025-11-23 00:38:12.081934 | orchestrator |  "osd_lvm_uuid": "939e3465-cd43-5a63-a3e3-1280596736df"
2025-11-23 00:38:12.081945 | orchestrator |  }
2025-11-23 00:38:12.081956 | orchestrator |  },
2025-11-23 00:38:12.081967 | orchestrator |  "lvm_volumes": [
2025-11-23 00:38:12.081978 | orchestrator |  {
2025-11-23 00:38:12.081988 | orchestrator |  "data": "osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6",
2025-11-23 00:38:12.081999 | orchestrator |  "data_vg": "ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6"
2025-11-23
00:38:12.082010 | orchestrator |  }, 2025-11-23 00:38:12.082081 | orchestrator |  { 2025-11-23 00:38:12.082092 | orchestrator |  "data": "osd-block-939e3465-cd43-5a63-a3e3-1280596736df", 2025-11-23 00:38:12.082103 | orchestrator |  "data_vg": "ceph-939e3465-cd43-5a63-a3e3-1280596736df" 2025-11-23 00:38:12.082119 | orchestrator |  } 2025-11-23 00:38:12.082131 | orchestrator |  ] 2025-11-23 00:38:12.082142 | orchestrator |  } 2025-11-23 00:38:12.082159 | orchestrator | } 2025-11-23 00:38:12.082171 | orchestrator | 2025-11-23 00:38:12.082181 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-23 00:38:12.082192 | orchestrator | Sunday 23 November 2025 00:38:10 +0000 (0:00:00.296) 0:00:10.935 ******* 2025-11-23 00:38:12.082203 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-23 00:38:12.082213 | orchestrator | 2025-11-23 00:38:12.082241 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-23 00:38:12.082252 | orchestrator | 2025-11-23 00:38:12.082263 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-23 00:38:12.082274 | orchestrator | Sunday 23 November 2025 00:38:11 +0000 (0:00:01.637) 0:00:12.572 ******* 2025-11-23 00:38:12.082284 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-23 00:38:12.082295 | orchestrator | 2025-11-23 00:38:12.082306 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-23 00:38:12.082316 | orchestrator | Sunday 23 November 2025 00:38:11 +0000 (0:00:00.210) 0:00:12.782 ******* 2025-11-23 00:38:12.082327 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:38:12.082338 | orchestrator | 2025-11-23 00:38:12.082357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.820300 | orchestrator | Sunday 23 November 
2025 00:38:12 +0000 (0:00:00.214) 0:00:12.997 ******* 2025-11-23 00:38:18.820410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-11-23 00:38:18.820459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-11-23 00:38:18.820480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-11-23 00:38:18.820507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-11-23 00:38:18.820528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-11-23 00:38:18.820545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-11-23 00:38:18.820563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-11-23 00:38:18.820582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-11-23 00:38:18.820600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-11-23 00:38:18.820619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-11-23 00:38:18.820637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-11-23 00:38:18.820653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-11-23 00:38:18.820665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-11-23 00:38:18.820677 | orchestrator | 2025-11-23 00:38:18.820689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.820700 | orchestrator | Sunday 23 November 2025 00:38:12 +0000 (0:00:00.328) 0:00:13.326 ******* 2025-11-23 00:38:18.820711 | 
orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.820724 | orchestrator | 2025-11-23 00:38:18.820735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.820746 | orchestrator | Sunday 23 November 2025 00:38:12 +0000 (0:00:00.187) 0:00:13.513 ******* 2025-11-23 00:38:18.820770 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.820781 | orchestrator | 2025-11-23 00:38:18.820792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.820804 | orchestrator | Sunday 23 November 2025 00:38:12 +0000 (0:00:00.172) 0:00:13.686 ******* 2025-11-23 00:38:18.820817 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.820829 | orchestrator | 2025-11-23 00:38:18.820842 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.820889 | orchestrator | Sunday 23 November 2025 00:38:12 +0000 (0:00:00.158) 0:00:13.844 ******* 2025-11-23 00:38:18.820910 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.820940 | orchestrator | 2025-11-23 00:38:18.820958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.820978 | orchestrator | Sunday 23 November 2025 00:38:13 +0000 (0:00:00.178) 0:00:14.023 ******* 2025-11-23 00:38:18.820995 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.821018 | orchestrator | 2025-11-23 00:38:18.821041 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.821059 | orchestrator | Sunday 23 November 2025 00:38:13 +0000 (0:00:00.413) 0:00:14.437 ******* 2025-11-23 00:38:18.821077 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.821094 | orchestrator | 2025-11-23 00:38:18.821135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-11-23 00:38:18.821155 | orchestrator | Sunday 23 November 2025 00:38:13 +0000 (0:00:00.189) 0:00:14.627 ******* 2025-11-23 00:38:18.821173 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.821191 | orchestrator | 2025-11-23 00:38:18.821209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.821229 | orchestrator | Sunday 23 November 2025 00:38:13 +0000 (0:00:00.197) 0:00:14.824 ******* 2025-11-23 00:38:18.821249 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.821269 | orchestrator | 2025-11-23 00:38:18.821288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.821306 | orchestrator | Sunday 23 November 2025 00:38:14 +0000 (0:00:00.237) 0:00:15.061 ******* 2025-11-23 00:38:18.821324 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78) 2025-11-23 00:38:18.821344 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78) 2025-11-23 00:38:18.821363 | orchestrator | 2025-11-23 00:38:18.821384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.821403 | orchestrator | Sunday 23 November 2025 00:38:14 +0000 (0:00:00.361) 0:00:15.422 ******* 2025-11-23 00:38:18.821421 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1) 2025-11-23 00:38:18.821511 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1) 2025-11-23 00:38:18.821523 | orchestrator | 2025-11-23 00:38:18.821534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.821544 | orchestrator | Sunday 23 November 2025 00:38:14 +0000 (0:00:00.377) 0:00:15.800 ******* 2025-11-23 00:38:18.821555 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4) 2025-11-23 00:38:18.821566 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4) 2025-11-23 00:38:18.821577 | orchestrator | 2025-11-23 00:38:18.821587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.821623 | orchestrator | Sunday 23 November 2025 00:38:15 +0000 (0:00:00.391) 0:00:16.192 ******* 2025-11-23 00:38:18.821635 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f) 2025-11-23 00:38:18.821645 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f) 2025-11-23 00:38:18.821657 | orchestrator | 2025-11-23 00:38:18.821667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:38:18.821678 | orchestrator | Sunday 23 November 2025 00:38:15 +0000 (0:00:00.360) 0:00:16.552 ******* 2025-11-23 00:38:18.821689 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-23 00:38:18.821700 | orchestrator | 2025-11-23 00:38:18.821710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.821721 | orchestrator | Sunday 23 November 2025 00:38:15 +0000 (0:00:00.282) 0:00:16.835 ******* 2025-11-23 00:38:18.821745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-11-23 00:38:18.821757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-23 00:38:18.821767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-23 00:38:18.821778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-23 
00:38:18.821788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-23 00:38:18.821799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-23 00:38:18.821809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-23 00:38:18.821820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-23 00:38:18.821830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-23 00:38:18.821841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-23 00:38:18.821851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-23 00:38:18.821862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-23 00:38:18.821872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-23 00:38:18.821883 | orchestrator | 2025-11-23 00:38:18.821894 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.821904 | orchestrator | Sunday 23 November 2025 00:38:16 +0000 (0:00:00.318) 0:00:17.153 ******* 2025-11-23 00:38:18.821915 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.821925 | orchestrator | 2025-11-23 00:38:18.821936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.821954 | orchestrator | Sunday 23 November 2025 00:38:16 +0000 (0:00:00.485) 0:00:17.639 ******* 2025-11-23 00:38:18.821965 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.821976 | orchestrator | 2025-11-23 00:38:18.821986 | orchestrator | TASK [Add known partitions to the list 
of available block devices] ************* 2025-11-23 00:38:18.821997 | orchestrator | Sunday 23 November 2025 00:38:16 +0000 (0:00:00.192) 0:00:17.831 ******* 2025-11-23 00:38:18.822008 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.822100 | orchestrator | 2025-11-23 00:38:18.822129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.822147 | orchestrator | Sunday 23 November 2025 00:38:17 +0000 (0:00:00.165) 0:00:17.997 ******* 2025-11-23 00:38:18.822166 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.822222 | orchestrator | 2025-11-23 00:38:18.822241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.822260 | orchestrator | Sunday 23 November 2025 00:38:17 +0000 (0:00:00.161) 0:00:18.158 ******* 2025-11-23 00:38:18.822279 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.822298 | orchestrator | 2025-11-23 00:38:18.822317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.822332 | orchestrator | Sunday 23 November 2025 00:38:17 +0000 (0:00:00.174) 0:00:18.332 ******* 2025-11-23 00:38:18.822343 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.822354 | orchestrator | 2025-11-23 00:38:18.822364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.822375 | orchestrator | Sunday 23 November 2025 00:38:17 +0000 (0:00:00.175) 0:00:18.507 ******* 2025-11-23 00:38:18.822386 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:18.822397 | orchestrator | 2025-11-23 00:38:18.822408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.822418 | orchestrator | Sunday 23 November 2025 00:38:17 +0000 (0:00:00.184) 0:00:18.692 ******* 2025-11-23 00:38:18.822467 | orchestrator | skipping: 
[testbed-node-4] 2025-11-23 00:38:18.822479 | orchestrator | 2025-11-23 00:38:18.822490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.822501 | orchestrator | Sunday 23 November 2025 00:38:17 +0000 (0:00:00.182) 0:00:18.875 ******* 2025-11-23 00:38:18.822511 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-23 00:38:18.822523 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-23 00:38:18.822535 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-23 00:38:18.822545 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-23 00:38:18.822556 | orchestrator | 2025-11-23 00:38:18.822567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:18.822578 | orchestrator | Sunday 23 November 2025 00:38:18 +0000 (0:00:00.694) 0:00:19.570 ******* 2025-11-23 00:38:18.822589 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.057580 | orchestrator | 2025-11-23 00:38:24.057686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:24.057703 | orchestrator | Sunday 23 November 2025 00:38:18 +0000 (0:00:00.169) 0:00:19.739 ******* 2025-11-23 00:38:24.057715 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.057727 | orchestrator | 2025-11-23 00:38:24.057739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:24.057750 | orchestrator | Sunday 23 November 2025 00:38:18 +0000 (0:00:00.166) 0:00:19.906 ******* 2025-11-23 00:38:24.057761 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.057772 | orchestrator | 2025-11-23 00:38:24.057784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:38:24.057795 | orchestrator | Sunday 23 November 2025 00:38:19 +0000 (0:00:00.172) 0:00:20.079 ******* 2025-11-23 
00:38:24.057805 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.057816 | orchestrator | 2025-11-23 00:38:24.057827 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-23 00:38:24.057838 | orchestrator | Sunday 23 November 2025 00:38:19 +0000 (0:00:00.467) 0:00:20.546 ******* 2025-11-23 00:38:24.057849 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-11-23 00:38:24.057860 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-11-23 00:38:24.057871 | orchestrator | 2025-11-23 00:38:24.057881 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-23 00:38:24.057892 | orchestrator | Sunday 23 November 2025 00:38:19 +0000 (0:00:00.146) 0:00:20.692 ******* 2025-11-23 00:38:24.057903 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.057915 | orchestrator | 2025-11-23 00:38:24.057926 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-23 00:38:24.057937 | orchestrator | Sunday 23 November 2025 00:38:19 +0000 (0:00:00.108) 0:00:20.801 ******* 2025-11-23 00:38:24.057948 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.057958 | orchestrator | 2025-11-23 00:38:24.057969 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-23 00:38:24.057980 | orchestrator | Sunday 23 November 2025 00:38:19 +0000 (0:00:00.116) 0:00:20.918 ******* 2025-11-23 00:38:24.057990 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058001 | orchestrator | 2025-11-23 00:38:24.058012 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-23 00:38:24.058077 | orchestrator | Sunday 23 November 2025 00:38:20 +0000 (0:00:00.110) 0:00:21.029 ******* 2025-11-23 00:38:24.058089 | orchestrator | ok: [testbed-node-4] 2025-11-23 
00:38:24.058102 | orchestrator | 2025-11-23 00:38:24.058113 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-23 00:38:24.058124 | orchestrator | Sunday 23 November 2025 00:38:20 +0000 (0:00:00.112) 0:00:21.141 ******* 2025-11-23 00:38:24.058136 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c884493c-7b6c-5149-8c24-d999b26a8d07'}}) 2025-11-23 00:38:24.058147 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1076031f-9245-50d5-902f-2c37ef490a74'}}) 2025-11-23 00:38:24.058183 | orchestrator | 2025-11-23 00:38:24.058195 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-23 00:38:24.058206 | orchestrator | Sunday 23 November 2025 00:38:20 +0000 (0:00:00.137) 0:00:21.279 ******* 2025-11-23 00:38:24.058218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c884493c-7b6c-5149-8c24-d999b26a8d07'}})  2025-11-23 00:38:24.058247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1076031f-9245-50d5-902f-2c37ef490a74'}})  2025-11-23 00:38:24.058259 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058270 | orchestrator | 2025-11-23 00:38:24.058281 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-23 00:38:24.058292 | orchestrator | Sunday 23 November 2025 00:38:20 +0000 (0:00:00.125) 0:00:21.405 ******* 2025-11-23 00:38:24.058303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c884493c-7b6c-5149-8c24-d999b26a8d07'}})  2025-11-23 00:38:24.058313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1076031f-9245-50d5-902f-2c37ef490a74'}})  2025-11-23 00:38:24.058324 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058335 | 
orchestrator | 2025-11-23 00:38:24.058346 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-23 00:38:24.058357 | orchestrator | Sunday 23 November 2025 00:38:20 +0000 (0:00:00.124) 0:00:21.529 ******* 2025-11-23 00:38:24.058368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c884493c-7b6c-5149-8c24-d999b26a8d07'}})  2025-11-23 00:38:24.058379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1076031f-9245-50d5-902f-2c37ef490a74'}})  2025-11-23 00:38:24.058390 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058401 | orchestrator | 2025-11-23 00:38:24.058411 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-23 00:38:24.058422 | orchestrator | Sunday 23 November 2025 00:38:20 +0000 (0:00:00.128) 0:00:21.658 ******* 2025-11-23 00:38:24.058505 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:38:24.058518 | orchestrator | 2025-11-23 00:38:24.058529 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-23 00:38:24.058541 | orchestrator | Sunday 23 November 2025 00:38:20 +0000 (0:00:00.114) 0:00:21.772 ******* 2025-11-23 00:38:24.058551 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:38:24.058562 | orchestrator | 2025-11-23 00:38:24.058573 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-23 00:38:24.058584 | orchestrator | Sunday 23 November 2025 00:38:20 +0000 (0:00:00.110) 0:00:21.883 ******* 2025-11-23 00:38:24.058613 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058625 | orchestrator | 2025-11-23 00:38:24.058636 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-23 00:38:24.058646 | orchestrator | Sunday 23 November 2025 00:38:21 +0000 (0:00:00.231) 0:00:22.115 
******* 2025-11-23 00:38:24.058657 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058668 | orchestrator | 2025-11-23 00:38:24.058679 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-11-23 00:38:24.058689 | orchestrator | Sunday 23 November 2025 00:38:21 +0000 (0:00:00.102) 0:00:22.217 ******* 2025-11-23 00:38:24.058700 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058711 | orchestrator | 2025-11-23 00:38:24.058721 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-23 00:38:24.058732 | orchestrator | Sunday 23 November 2025 00:38:21 +0000 (0:00:00.111) 0:00:22.329 ******* 2025-11-23 00:38:24.058743 | orchestrator | ok: [testbed-node-4] => { 2025-11-23 00:38:24.058754 | orchestrator |  "ceph_osd_devices": { 2025-11-23 00:38:24.058765 | orchestrator |  "sdb": { 2025-11-23 00:38:24.058776 | orchestrator |  "osd_lvm_uuid": "c884493c-7b6c-5149-8c24-d999b26a8d07" 2025-11-23 00:38:24.058797 | orchestrator |  }, 2025-11-23 00:38:24.058808 | orchestrator |  "sdc": { 2025-11-23 00:38:24.058819 | orchestrator |  "osd_lvm_uuid": "1076031f-9245-50d5-902f-2c37ef490a74" 2025-11-23 00:38:24.058830 | orchestrator |  } 2025-11-23 00:38:24.058841 | orchestrator |  } 2025-11-23 00:38:24.058852 | orchestrator | } 2025-11-23 00:38:24.058863 | orchestrator | 2025-11-23 00:38:24.058874 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-23 00:38:24.058884 | orchestrator | Sunday 23 November 2025 00:38:21 +0000 (0:00:00.115) 0:00:22.445 ******* 2025-11-23 00:38:24.058895 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058906 | orchestrator | 2025-11-23 00:38:24.058917 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-23 00:38:24.058927 | orchestrator | Sunday 23 November 2025 00:38:21 +0000 (0:00:00.089) 0:00:22.534 ******* 
2025-11-23 00:38:24.058938 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058949 | orchestrator | 2025-11-23 00:38:24.058959 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-11-23 00:38:24.058970 | orchestrator | Sunday 23 November 2025 00:38:21 +0000 (0:00:00.091) 0:00:22.625 ******* 2025-11-23 00:38:24.058981 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:38:24.058992 | orchestrator | 2025-11-23 00:38:24.059002 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-23 00:38:24.059013 | orchestrator | Sunday 23 November 2025 00:38:21 +0000 (0:00:00.112) 0:00:22.737 ******* 2025-11-23 00:38:24.059024 | orchestrator | changed: [testbed-node-4] => { 2025-11-23 00:38:24.059035 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-23 00:38:24.059046 | orchestrator |  "ceph_osd_devices": { 2025-11-23 00:38:24.059057 | orchestrator |  "sdb": { 2025-11-23 00:38:24.059068 | orchestrator |  "osd_lvm_uuid": "c884493c-7b6c-5149-8c24-d999b26a8d07" 2025-11-23 00:38:24.059079 | orchestrator |  }, 2025-11-23 00:38:24.059090 | orchestrator |  "sdc": { 2025-11-23 00:38:24.059101 | orchestrator |  "osd_lvm_uuid": "1076031f-9245-50d5-902f-2c37ef490a74" 2025-11-23 00:38:24.059111 | orchestrator |  } 2025-11-23 00:38:24.059122 | orchestrator |  }, 2025-11-23 00:38:24.059133 | orchestrator |  "lvm_volumes": [ 2025-11-23 00:38:24.059144 | orchestrator |  { 2025-11-23 00:38:24.059155 | orchestrator |  "data": "osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07", 2025-11-23 00:38:24.059166 | orchestrator |  "data_vg": "ceph-c884493c-7b6c-5149-8c24-d999b26a8d07" 2025-11-23 00:38:24.059177 | orchestrator |  }, 2025-11-23 00:38:24.059187 | orchestrator |  { 2025-11-23 00:38:24.059198 | orchestrator |  "data": "osd-block-1076031f-9245-50d5-902f-2c37ef490a74", 2025-11-23 00:38:24.059209 | orchestrator |  "data_vg": "ceph-1076031f-9245-50d5-902f-2c37ef490a74" 
2025-11-23 00:38:24.059220 | orchestrator |  }
2025-11-23 00:38:24.059231 | orchestrator |  ]
2025-11-23 00:38:24.059241 | orchestrator |  }
2025-11-23 00:38:24.059252 | orchestrator | }
2025-11-23 00:38:24.059263 | orchestrator |
2025-11-23 00:38:24.059274 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-11-23 00:38:24.059285 | orchestrator | Sunday 23 November 2025 00:38:22 +0000 (0:00:00.188) 0:00:22.926 *******
2025-11-23 00:38:24.059296 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-11-23 00:38:24.059306 | orchestrator |
2025-11-23 00:38:24.059317 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-11-23 00:38:24.059328 | orchestrator |
2025-11-23 00:38:24.059339 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-23 00:38:24.059350 | orchestrator | Sunday 23 November 2025 00:38:22 +0000 (0:00:00.872) 0:00:23.798 *******
2025-11-23 00:38:24.059360 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-11-23 00:38:24.059371 | orchestrator |
2025-11-23 00:38:24.059382 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-23 00:38:24.059406 | orchestrator | Sunday 23 November 2025 00:38:23 +0000 (0:00:00.522) 0:00:24.321 *******
2025-11-23 00:38:24.059417 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:38:24.059462 | orchestrator |
2025-11-23 00:38:24.059477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:24.059488 | orchestrator | Sunday 23 November 2025 00:38:23 +0000 (0:00:00.273) 0:00:24.594 *******
2025-11-23 00:38:24.059498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-11-23 00:38:24.059509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-11-23 00:38:24.059520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-11-23 00:38:24.059531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-11-23 00:38:24.059542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-11-23 00:38:24.059560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-11-23 00:38:30.580922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-11-23 00:38:30.580993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-11-23 00:38:30.580999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-11-23 00:38:30.581004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-11-23 00:38:30.581008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-11-23 00:38:30.581012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-11-23 00:38:30.581016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-11-23 00:38:30.581020 | orchestrator |
2025-11-23 00:38:30.581025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581029 | orchestrator | Sunday 23 November 2025 00:38:24 +0000 (0:00:00.378) 0:00:24.972 *******
2025-11-23 00:38:30.581033 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581038 | orchestrator |
2025-11-23 00:38:30.581042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581046 | orchestrator | Sunday 23 November 2025 00:38:24 +0000 (0:00:00.144) 0:00:25.117 *******
2025-11-23 00:38:30.581050 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581053 | orchestrator |
2025-11-23 00:38:30.581057 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581061 | orchestrator | Sunday 23 November 2025 00:38:24 +0000 (0:00:00.138) 0:00:25.256 *******
2025-11-23 00:38:30.581065 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581069 | orchestrator |
2025-11-23 00:38:30.581072 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581076 | orchestrator | Sunday 23 November 2025 00:38:24 +0000 (0:00:00.136) 0:00:25.392 *******
2025-11-23 00:38:30.581080 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581084 | orchestrator |
2025-11-23 00:38:30.581088 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581091 | orchestrator | Sunday 23 November 2025 00:38:24 +0000 (0:00:00.156) 0:00:25.549 *******
2025-11-23 00:38:30.581095 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581099 | orchestrator |
2025-11-23 00:38:30.581103 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581110 | orchestrator | Sunday 23 November 2025 00:38:24 +0000 (0:00:00.149) 0:00:25.698 *******
2025-11-23 00:38:30.581116 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581123 | orchestrator |
2025-11-23 00:38:30.581129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581155 | orchestrator | Sunday 23 November 2025 00:38:24 +0000 (0:00:00.138) 0:00:25.836 *******
2025-11-23 00:38:30.581161 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581167 | orchestrator |
2025-11-23 00:38:30.581173 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581179 | orchestrator | Sunday 23 November 2025 00:38:25 +0000 (0:00:00.139) 0:00:25.976 *******
2025-11-23 00:38:30.581186 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581192 | orchestrator |
2025-11-23 00:38:30.581199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581206 | orchestrator | Sunday 23 November 2025 00:38:25 +0000 (0:00:00.135) 0:00:26.112 *******
2025-11-23 00:38:30.581213 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67)
2025-11-23 00:38:30.581220 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67)
2025-11-23 00:38:30.581226 | orchestrator |
2025-11-23 00:38:30.581230 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581234 | orchestrator | Sunday 23 November 2025 00:38:25 +0000 (0:00:00.627) 0:00:26.739 *******
2025-11-23 00:38:30.581238 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa)
2025-11-23 00:38:30.581242 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa)
2025-11-23 00:38:30.581246 | orchestrator |
2025-11-23 00:38:30.581250 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581253 | orchestrator | Sunday 23 November 2025 00:38:26 +0000 (0:00:00.398) 0:00:27.138 *******
2025-11-23 00:38:30.581257 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b)
2025-11-23 00:38:30.581260 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b)
2025-11-23 00:38:30.581264 | orchestrator |
2025-11-23 00:38:30.581268 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581271 | orchestrator | Sunday 23 November 2025 00:38:26 +0000 (0:00:00.336) 0:00:27.474 *******
2025-11-23 00:38:30.581275 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f)
2025-11-23 00:38:30.581279 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f)
2025-11-23 00:38:30.581282 | orchestrator |
2025-11-23 00:38:30.581286 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:38:30.581290 | orchestrator | Sunday 23 November 2025 00:38:26 +0000 (0:00:00.354) 0:00:27.828 *******
2025-11-23 00:38:30.581293 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-11-23 00:38:30.581297 | orchestrator |
2025-11-23 00:38:30.581301 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581314 | orchestrator | Sunday 23 November 2025 00:38:27 +0000 (0:00:00.294) 0:00:28.123 *******
2025-11-23 00:38:30.581318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-11-23 00:38:30.581322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-11-23 00:38:30.581326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-11-23 00:38:30.581330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-11-23 00:38:30.581333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-11-23 00:38:30.581351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-11-23 00:38:30.581356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-11-23 00:38:30.581359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-11-23 00:38:30.581367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-11-23 00:38:30.581371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-11-23 00:38:30.581375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-11-23 00:38:30.581378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-11-23 00:38:30.581382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-11-23 00:38:30.581386 | orchestrator |
2025-11-23 00:38:30.581389 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581393 | orchestrator | Sunday 23 November 2025 00:38:27 +0000 (0:00:00.333) 0:00:28.456 *******
2025-11-23 00:38:30.581397 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581400 | orchestrator |
2025-11-23 00:38:30.581404 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581408 | orchestrator | Sunday 23 November 2025 00:38:27 +0000 (0:00:00.144) 0:00:28.601 *******
2025-11-23 00:38:30.581411 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581415 | orchestrator |
2025-11-23 00:38:30.581418 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581425 | orchestrator | Sunday 23 November 2025 00:38:27 +0000 (0:00:00.164) 0:00:28.766 *******
2025-11-23 00:38:30.581429 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581475 | orchestrator |
2025-11-23 00:38:30.581480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581484 | orchestrator | Sunday 23 November 2025 00:38:28 +0000 (0:00:00.169) 0:00:28.936 *******
2025-11-23 00:38:30.581488 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581492 | orchestrator |
2025-11-23 00:38:30.581496 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581528 | orchestrator | Sunday 23 November 2025 00:38:28 +0000 (0:00:00.165) 0:00:29.102 *******
2025-11-23 00:38:30.581532 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581537 | orchestrator |
2025-11-23 00:38:30.581541 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581545 | orchestrator | Sunday 23 November 2025 00:38:28 +0000 (0:00:00.215) 0:00:29.317 *******
2025-11-23 00:38:30.581549 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581553 | orchestrator |
2025-11-23 00:38:30.581557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581562 | orchestrator | Sunday 23 November 2025 00:38:28 +0000 (0:00:00.459) 0:00:29.777 *******
2025-11-23 00:38:30.581566 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581570 | orchestrator |
2025-11-23 00:38:30.581574 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581579 | orchestrator | Sunday 23 November 2025 00:38:29 +0000 (0:00:00.171) 0:00:29.949 *******
2025-11-23 00:38:30.581583 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581587 | orchestrator |
2025-11-23 00:38:30.581591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581595 | orchestrator | Sunday 23 November 2025 00:38:29 +0000 (0:00:00.163) 0:00:30.112 *******
2025-11-23 00:38:30.581599 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-11-23 00:38:30.581604 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-11-23 00:38:30.581609 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-11-23 00:38:30.581613 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-11-23 00:38:30.581617 | orchestrator |
2025-11-23 00:38:30.581621 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581626 | orchestrator | Sunday 23 November 2025 00:38:29 +0000 (0:00:00.582) 0:00:30.695 *******
2025-11-23 00:38:30.581643 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581651 | orchestrator |
2025-11-23 00:38:30.581655 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581659 | orchestrator | Sunday 23 November 2025 00:38:29 +0000 (0:00:00.180) 0:00:30.875 *******
2025-11-23 00:38:30.581664 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581668 | orchestrator |
2025-11-23 00:38:30.581672 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581676 | orchestrator | Sunday 23 November 2025 00:38:30 +0000 (0:00:00.182) 0:00:31.058 *******
2025-11-23 00:38:30.581681 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581685 | orchestrator |
2025-11-23 00:38:30.581689 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:38:30.581694 | orchestrator | Sunday 23 November 2025 00:38:30 +0000 (0:00:00.194) 0:00:31.252 *******
2025-11-23 00:38:30.581698 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:30.581745 | orchestrator |
2025-11-23 00:38:30.581760 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-11-23 00:38:34.455980 | orchestrator | Sunday 23 November 2025 00:38:30 +0000 (0:00:00.244) 0:00:31.497 *******
2025-11-23 00:38:34.456108 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-11-23 00:38:34.456133 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-11-23 00:38:34.456154 | orchestrator |
2025-11-23 00:38:34.457042 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-11-23 00:38:34.457066 | orchestrator | Sunday 23 November 2025 00:38:30 +0000 (0:00:00.158) 0:00:31.656 *******
2025-11-23 00:38:34.457078 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457089 | orchestrator |
2025-11-23 00:38:34.457101 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-11-23 00:38:34.457112 | orchestrator | Sunday 23 November 2025 00:38:30 +0000 (0:00:00.125) 0:00:31.781 *******
2025-11-23 00:38:34.457122 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457134 | orchestrator |
2025-11-23 00:38:34.457144 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-11-23 00:38:34.457155 | orchestrator | Sunday 23 November 2025 00:38:30 +0000 (0:00:00.134) 0:00:31.916 *******
2025-11-23 00:38:34.457166 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457176 | orchestrator |
2025-11-23 00:38:34.457187 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-11-23 00:38:34.457198 | orchestrator | Sunday 23 November 2025 00:38:31 +0000 (0:00:00.263) 0:00:32.180 *******
2025-11-23 00:38:34.457209 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:38:34.457220 | orchestrator |
2025-11-23 00:38:34.457231 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-11-23 00:38:34.457243 | orchestrator | Sunday 23 November 2025 00:38:31 +0000 (0:00:00.122) 0:00:32.302 *******
2025-11-23 00:38:34.457254 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77b7216-a915-581b-8f3c-a7fc3e50862f'}})
2025-11-23 00:38:34.457266 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}})
2025-11-23 00:38:34.457285 | orchestrator |
2025-11-23 00:38:34.457305 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-11-23 00:38:34.457325 | orchestrator | Sunday 23 November 2025 00:38:31 +0000 (0:00:00.176) 0:00:32.479 *******
2025-11-23 00:38:34.457345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77b7216-a915-581b-8f3c-a7fc3e50862f'}})
2025-11-23 00:38:34.457365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}})
2025-11-23 00:38:34.457383 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457402 | orchestrator |
2025-11-23 00:38:34.457422 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-11-23 00:38:34.457463 | orchestrator | Sunday 23 November 2025 00:38:31 +0000 (0:00:00.135) 0:00:32.614 *******
2025-11-23 00:38:34.457517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77b7216-a915-581b-8f3c-a7fc3e50862f'}})
2025-11-23 00:38:34.457536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}})
2025-11-23 00:38:34.457554 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457574 | orchestrator |
2025-11-23 00:38:34.457593 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-11-23 00:38:34.457611 | orchestrator | Sunday 23 November 2025 00:38:31 +0000 (0:00:00.131) 0:00:32.746 *******
2025-11-23 00:38:34.457644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77b7216-a915-581b-8f3c-a7fc3e50862f'}})
2025-11-23 00:38:34.457656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}})
2025-11-23 00:38:34.457667 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457677 | orchestrator |
2025-11-23 00:38:34.457688 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-11-23 00:38:34.457699 | orchestrator | Sunday 23 November 2025 00:38:31 +0000 (0:00:00.138) 0:00:32.884 *******
2025-11-23 00:38:34.457709 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:38:34.457720 | orchestrator |
2025-11-23 00:38:34.457730 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-11-23 00:38:34.457741 | orchestrator | Sunday 23 November 2025 00:38:32 +0000 (0:00:00.114) 0:00:32.998 *******
2025-11-23 00:38:34.457752 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:38:34.457762 | orchestrator |
2025-11-23 00:38:34.457773 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-11-23 00:38:34.457783 | orchestrator | Sunday 23 November 2025 00:38:32 +0000 (0:00:00.119) 0:00:33.118 *******
2025-11-23 00:38:34.457794 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457804 | orchestrator |
2025-11-23 00:38:34.457815 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-11-23 00:38:34.457826 | orchestrator | Sunday 23 November 2025 00:38:32 +0000 (0:00:00.154) 0:00:33.273 *******
2025-11-23 00:38:34.457837 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457847 | orchestrator |
2025-11-23 00:38:34.457858 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-11-23 00:38:34.457868 | orchestrator | Sunday 23 November 2025 00:38:32 +0000 (0:00:00.130) 0:00:33.404 *******
2025-11-23 00:38:34.457878 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.457889 | orchestrator |
2025-11-23 00:38:34.457899 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-11-23 00:38:34.457910 | orchestrator | Sunday 23 November 2025 00:38:32 +0000 (0:00:00.110) 0:00:33.515 *******
2025-11-23 00:38:34.457921 | orchestrator | ok: [testbed-node-5] => {
2025-11-23 00:38:34.457932 | orchestrator |  "ceph_osd_devices": {
2025-11-23 00:38:34.457943 | orchestrator |  "sdb": {
2025-11-23 00:38:34.457975 | orchestrator |  "osd_lvm_uuid": "e77b7216-a915-581b-8f3c-a7fc3e50862f"
2025-11-23 00:38:34.457987 | orchestrator |  },
2025-11-23 00:38:34.457998 | orchestrator |  "sdc": {
2025-11-23 00:38:34.458009 | orchestrator |  "osd_lvm_uuid": "889c1fef-e00e-5a44-b704-8d22cfa7cd7a"
2025-11-23 00:38:34.458091 | orchestrator |  }
2025-11-23 00:38:34.458106 | orchestrator |  }
2025-11-23 00:38:34.458118 | orchestrator | }
2025-11-23 00:38:34.458129 | orchestrator |
2025-11-23 00:38:34.458140 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-11-23 00:38:34.458150 | orchestrator | Sunday 23 November 2025 00:38:32 +0000 (0:00:00.115) 0:00:33.630 *******
2025-11-23 00:38:34.458161 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.458171 | orchestrator |
2025-11-23 00:38:34.458182 | orchestrator | TASK [Print DB devices] ********************************************************
2025-11-23 00:38:34.458193 | orchestrator | Sunday 23 November 2025 00:38:32 +0000 (0:00:00.110) 0:00:33.740 *******
2025-11-23 00:38:34.458216 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.458226 | orchestrator |
2025-11-23 00:38:34.458237 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-11-23 00:38:34.458247 | orchestrator | Sunday 23 November 2025 00:38:33 +0000 (0:00:00.281) 0:00:34.021 *******
2025-11-23 00:38:34.458258 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:38:34.458268 | orchestrator |
2025-11-23 00:38:34.458279 | orchestrator | TASK [Print configuration data] ************************************************
2025-11-23 00:38:34.458290 | orchestrator | Sunday 23 November 2025 00:38:33 +0000 (0:00:00.106) 0:00:34.128 *******
2025-11-23 00:38:34.458300 | orchestrator | changed: [testbed-node-5] => {
2025-11-23 00:38:34.458311 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-11-23 00:38:34.458322 | orchestrator |  "ceph_osd_devices": {
2025-11-23 00:38:34.458332 | orchestrator |  "sdb": {
2025-11-23 00:38:34.458343 | orchestrator |  "osd_lvm_uuid": "e77b7216-a915-581b-8f3c-a7fc3e50862f"
2025-11-23 00:38:34.458354 | orchestrator |  },
2025-11-23 00:38:34.458364 | orchestrator |  "sdc": {
2025-11-23 00:38:34.458375 | orchestrator |  "osd_lvm_uuid": "889c1fef-e00e-5a44-b704-8d22cfa7cd7a"
2025-11-23 00:38:34.458385 | orchestrator |  }
2025-11-23 00:38:34.458396 | orchestrator |  },
2025-11-23 00:38:34.458407 | orchestrator |  "lvm_volumes": [
2025-11-23 00:38:34.458417 | orchestrator |  {
2025-11-23 00:38:34.458428 | orchestrator |  "data": "osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f",
2025-11-23 00:38:34.458470 | orchestrator |  "data_vg": "ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f"
2025-11-23 00:38:34.458482 | orchestrator |  },
2025-11-23 00:38:34.458492 | orchestrator |  {
2025-11-23 00:38:34.458503 | orchestrator |  "data": "osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a",
2025-11-23 00:38:34.458514 | orchestrator |  "data_vg": "ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a"
2025-11-23 00:38:34.458524 | orchestrator |  }
2025-11-23 00:38:34.458540 | orchestrator |  ]
2025-11-23 00:38:34.458551 | orchestrator |  }
2025-11-23 00:38:34.458562 | orchestrator | }
2025-11-23 00:38:34.458573 | orchestrator |
2025-11-23 00:38:34.458583 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-11-23 00:38:34.458594 | orchestrator | Sunday 23 November 2025 00:38:33 +0000 (0:00:00.183) 0:00:34.311 *******
2025-11-23 00:38:34.458604 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-11-23 00:38:34.458615 | orchestrator |
2025-11-23 00:38:34.458626 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:38:34.458637 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-11-23 00:38:34.458649 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-11-23 00:38:34.458660 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-11-23 00:38:34.458670 | orchestrator |
2025-11-23 00:38:34.458681 | orchestrator |
2025-11-23 00:38:34.458692 | orchestrator |
2025-11-23 00:38:34.458703 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:38:34.458713 | orchestrator | Sunday 23 November 2025 00:38:34 +0000 (0:00:01.005) 0:00:35.316 *******
2025-11-23 00:38:34.458724 | orchestrator | ===============================================================================
2025-11-23 00:38:34.458734 | orchestrator | Write configuration file ------------------------------------------------ 3.52s
2025-11-23 00:38:34.458745 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s
2025-11-23 00:38:34.458756 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2025-11-23 00:38:34.458766 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s
2025-11-23 00:38:34.458784 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2025-11-23 00:38:34.458795 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-11-23 00:38:34.458805 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2025-11-23 00:38:34.458816 | orchestrator | Print configuration data ------------------------------------------------ 0.67s
2025-11-23 00:38:34.458826 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-11-23 00:38:34.458837 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-11-23 00:38:34.458847 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2025-11-23 00:38:34.458858 | orchestrator | Set DB devices config data ---------------------------------------------- 0.51s
2025-11-23 00:38:34.458868 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.50s
2025-11-23 00:38:34.458888 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.50s
2025-11-23 00:38:34.746926 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s
2025-11-23 00:38:34.747027 | orchestrator | Print DB devices -------------------------------------------------------- 0.50s
2025-11-23 00:38:34.747041 | orchestrator | Add known links to the list of available block devices ------------------ 0.49s
2025-11-23 00:38:34.747053 | orchestrator | Add known partitions to the list of available block devices ------------- 0.49s
2025-11-23 00:38:34.747064 | orchestrator | Add known partitions to the list of available block devices ------------- 0.47s
2025-11-23 00:38:34.747080 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.46s
2025-11-23 00:38:57.074255 | orchestrator | 2025-11-23 00:38:57 | INFO  | Task 6231d401-7e67-45bd-a157-089e29fc9320 (sync inventory) is running in background. Output coming soon.
2025-11-23 00:39:20.777183 | orchestrator | 2025-11-23 00:38:58 | INFO  | Starting group_vars file reorganization
2025-11-23 00:39:20.777260 | orchestrator | 2025-11-23 00:38:58 | INFO  | Moved 0 file(s) to their respective directories
2025-11-23 00:39:20.777267 | orchestrator | 2025-11-23 00:38:58 | INFO  | Group_vars file reorganization completed
2025-11-23 00:39:20.777272 | orchestrator | 2025-11-23 00:39:01 | INFO  | Starting variable preparation from inventory
2025-11-23 00:39:20.777276 | orchestrator | 2025-11-23 00:39:03 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-11-23 00:39:20.777281 | orchestrator | 2025-11-23 00:39:03 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-11-23 00:39:20.777300 | orchestrator | 2025-11-23 00:39:03 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-11-23 00:39:20.777304 | orchestrator | 2025-11-23 00:39:03 | INFO  | 3 file(s) written, 6 host(s) processed
2025-11-23 00:39:20.777309 | orchestrator | 2025-11-23 00:39:03 | INFO  | Variable preparation completed
2025-11-23 00:39:20.777314 | orchestrator | 2025-11-23 00:39:04 | INFO  | Starting inventory overwrite handling
2025-11-23 00:39:20.777321 | orchestrator | 2025-11-23 00:39:04 | INFO  | Handling group overwrites in 99-overwrite
2025-11-23 00:39:20.777325 | orchestrator | 2025-11-23 00:39:04 | INFO  | Removing group frr:children from 60-generic
2025-11-23 00:39:20.777329 | orchestrator | 2025-11-23 00:39:04 | INFO  | Removing group storage:children from 50-kolla
2025-11-23 00:39:20.777333 | orchestrator | 2025-11-23 00:39:04 | INFO  | Removing group netbird:children from 50-infrastructure
2025-11-23 00:39:20.777337 | orchestrator | 2025-11-23 00:39:04 | INFO  | Removing group ceph-mds from 50-ceph
2025-11-23 00:39:20.777342 | orchestrator | 2025-11-23 00:39:04 | INFO  | Removing group ceph-rgw from 50-ceph
2025-11-23 00:39:20.777359 | orchestrator | 2025-11-23 00:39:04 | INFO  | Handling group overwrites in 20-roles
2025-11-23 00:39:20.777363 | orchestrator | 2025-11-23 00:39:04 | INFO  | Removing group k3s_node from 50-infrastructure
2025-11-23 00:39:20.777367 | orchestrator | 2025-11-23 00:39:04 | INFO  | Removed 6 group(s) in total
2025-11-23 00:39:20.777371 | orchestrator | 2025-11-23 00:39:04 | INFO  | Inventory overwrite handling completed
2025-11-23 00:39:20.777375 | orchestrator | 2025-11-23 00:39:05 | INFO  | Starting merge of inventory files
2025-11-23 00:39:20.777379 | orchestrator | 2025-11-23 00:39:05 | INFO  | Inventory files merged successfully
2025-11-23 00:39:20.777383 | orchestrator | 2025-11-23 00:39:10 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-11-23 00:39:20.777387 | orchestrator | 2025-11-23 00:39:19 | INFO  | Successfully wrote ClusterShell configuration
2025-11-23 00:39:20.777391 | orchestrator | [master 57c2b46] 2025-11-23-00-39
2025-11-23 00:39:20.777396 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-11-23 00:39:22.420371 | orchestrator | 2025-11-23 00:39:22 | INFO  | Task cfc62782-423e-4eb4-8133-8bb08baa9788 (ceph-create-lvm-devices) was prepared for execution.
2025-11-23 00:39:22.420520 | orchestrator | 2025-11-23 00:39:22 | INFO  | It takes a moment until task cfc62782-423e-4eb4-8133-8bb08baa9788 (ceph-create-lvm-devices) has been started and output is visible here.
2025-11-23 00:39:31.998235 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2025-11-23 00:39:31.998313 | orchestrator | 2.16.14
2025-11-23 00:39:31.998321 | orchestrator |
2025-11-23 00:39:31.998326 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-11-23 00:39:31.998332 | orchestrator |
2025-11-23 00:39:31.998336 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-23 00:39:31.998340 | orchestrator | Sunday 23 November 2025 00:39:25 +0000 (0:00:00.224) 0:00:00.224 *******
2025-11-23 00:39:31.998345 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-23 00:39:31.998349 | orchestrator |
2025-11-23 00:39:31.998353 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-23 00:39:31.998357 | orchestrator | Sunday 23 November 2025 00:39:25 +0000 (0:00:00.214) 0:00:00.438 *******
2025-11-23 00:39:31.998360 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:39:31.998365 | orchestrator |
2025-11-23 00:39:31.998369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998373 | orchestrator | Sunday 23 November 2025 00:39:26 +0000 (0:00:00.155) 0:00:00.594 *******
2025-11-23 00:39:31.998377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-11-23 00:39:31.998381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-11-23 00:39:31.998385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-11-23 00:39:31.998389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-11-23 00:39:31.998392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-11-23 00:39:31.998396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-11-23 00:39:31.998400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-11-23 00:39:31.998404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-11-23 00:39:31.998408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-11-23 00:39:31.998411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-11-23 00:39:31.998415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-11-23 00:39:31.998433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-11-23 00:39:31.998438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-11-23 00:39:31.998441 | orchestrator |
2025-11-23 00:39:31.998445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998449 | orchestrator | Sunday 23 November 2025 00:39:26 +0000 (0:00:00.394) 0:00:00.989 *******
2025-11-23 00:39:31.998452 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:39:31.998456 | orchestrator |
2025-11-23 00:39:31.998460 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998464 | orchestrator | Sunday 23 November 2025 00:39:26 +0000 (0:00:00.175) 0:00:01.164 *******
2025-11-23 00:39:31.998468 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:39:31.998472 | orchestrator |
2025-11-23 00:39:31.998513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998519 | orchestrator | Sunday 23 November 2025 00:39:26 +0000 (0:00:00.166) 0:00:01.331 *******
2025-11-23 00:39:31.998522 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:39:31.998526 | orchestrator |
2025-11-23 00:39:31.998530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998534 | orchestrator | Sunday 23 November 2025 00:39:26 +0000 (0:00:00.162) 0:00:01.493 *******
2025-11-23 00:39:31.998538 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:39:31.998542 | orchestrator |
2025-11-23 00:39:31.998546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998550 | orchestrator | Sunday 23 November 2025 00:39:27 +0000 (0:00:00.184) 0:00:01.677 *******
2025-11-23 00:39:31.998554 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:39:31.998557 | orchestrator |
2025-11-23 00:39:31.998561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998565 | orchestrator | Sunday 23 November 2025 00:39:27 +0000 (0:00:00.155) 0:00:01.833 *******
2025-11-23 00:39:31.998569 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:39:31.998572 | orchestrator |
2025-11-23 00:39:31.998576 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998580 | orchestrator | Sunday 23 November 2025 00:39:27 +0000 (0:00:00.185) 0:00:02.018 *******
2025-11-23 00:39:31.998584 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:39:31.998587 | orchestrator |
2025-11-23 00:39:31.998591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998595 | orchestrator | Sunday 23 November 2025 00:39:27 +0000 (0:00:00.176) 0:00:02.194 *******
2025-11-23 00:39:31.998599 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:39:31.998602 | orchestrator |
2025-11-23 00:39:31.998606 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998610 | orchestrator | Sunday 23 November 2025 00:39:27 +0000 (0:00:00.192) 0:00:02.387 *******
2025-11-23 00:39:31.998614 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1)
2025-11-23 00:39:31.998619 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1)
2025-11-23 00:39:31.998623 | orchestrator |
2025-11-23 00:39:31.998627 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998640 | orchestrator | Sunday 23 November 2025 00:39:28 +0000 (0:00:00.384) 0:00:02.771 *******
2025-11-23 00:39:31.998644 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0)
2025-11-23 00:39:31.998648 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0)
2025-11-23 00:39:31.998652 | orchestrator |
2025-11-23 00:39:31.998655 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998659 | orchestrator | Sunday 23 November 2025 00:39:28 +0000 (0:00:00.528) 0:00:03.300 *******
2025-11-23 00:39:31.998667 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356)
2025-11-23 00:39:31.998671 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356)
2025-11-23 00:39:31.998675 | orchestrator |
2025-11-23 00:39:31.998679 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:39:31.998682 | orchestrator | Sunday 23 November 2025 00:39:29 +0000 (0:00:00.580) 0:00:03.880 *******
2025-11-23 00:39:31.998686 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f)
2025-11-23 00:39:31.998690 | orchestrator |
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f) 2025-11-23 00:39:31.998694 | orchestrator | 2025-11-23 00:39:31.998698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:31.998701 | orchestrator | Sunday 23 November 2025 00:39:30 +0000 (0:00:00.657) 0:00:04.537 ******* 2025-11-23 00:39:31.998705 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-23 00:39:31.998709 | orchestrator | 2025-11-23 00:39:31.998713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:31.998716 | orchestrator | Sunday 23 November 2025 00:39:30 +0000 (0:00:00.317) 0:00:04.855 ******* 2025-11-23 00:39:31.998720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-11-23 00:39:31.998724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-11-23 00:39:31.998737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-11-23 00:39:31.998741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-11-23 00:39:31.998745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-11-23 00:39:31.998749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-11-23 00:39:31.998753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-11-23 00:39:31.998756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-11-23 00:39:31.998760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-11-23 00:39:31.998764 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-11-23 00:39:31.998770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-11-23 00:39:31.998773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-11-23 00:39:31.998777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-11-23 00:39:31.998781 | orchestrator | 2025-11-23 00:39:31.998785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:31.998788 | orchestrator | Sunday 23 November 2025 00:39:30 +0000 (0:00:00.374) 0:00:05.230 ******* 2025-11-23 00:39:31.998792 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:31.998796 | orchestrator | 2025-11-23 00:39:31.998800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:31.998803 | orchestrator | Sunday 23 November 2025 00:39:30 +0000 (0:00:00.184) 0:00:05.414 ******* 2025-11-23 00:39:31.998807 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:31.998811 | orchestrator | 2025-11-23 00:39:31.998815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:31.998818 | orchestrator | Sunday 23 November 2025 00:39:31 +0000 (0:00:00.176) 0:00:05.591 ******* 2025-11-23 00:39:31.998822 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:31.998826 | orchestrator | 2025-11-23 00:39:31.998830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:31.998836 | orchestrator | Sunday 23 November 2025 00:39:31 +0000 (0:00:00.191) 0:00:05.783 ******* 2025-11-23 00:39:31.998840 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:31.998844 | orchestrator | 2025-11-23 00:39:31.998848 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-11-23 00:39:31.998852 | orchestrator | Sunday 23 November 2025 00:39:31 +0000 (0:00:00.179) 0:00:05.962 ******* 2025-11-23 00:39:31.998857 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:31.998863 | orchestrator | 2025-11-23 00:39:31.998869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:31.998875 | orchestrator | Sunday 23 November 2025 00:39:31 +0000 (0:00:00.177) 0:00:06.139 ******* 2025-11-23 00:39:31.998890 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:31.998896 | orchestrator | 2025-11-23 00:39:31.998901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:31.998907 | orchestrator | Sunday 23 November 2025 00:39:31 +0000 (0:00:00.179) 0:00:06.319 ******* 2025-11-23 00:39:31.998913 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:31.998920 | orchestrator | 2025-11-23 00:39:31.998931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:39.207725 | orchestrator | Sunday 23 November 2025 00:39:31 +0000 (0:00:00.187) 0:00:06.506 ******* 2025-11-23 00:39:39.207838 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.207856 | orchestrator | 2025-11-23 00:39:39.207869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:39.207880 | orchestrator | Sunday 23 November 2025 00:39:32 +0000 (0:00:00.158) 0:00:06.665 ******* 2025-11-23 00:39:39.207891 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-11-23 00:39:39.207903 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-11-23 00:39:39.207915 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-11-23 00:39:39.207926 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-11-23 00:39:39.207937 | orchestrator | 2025-11-23 
00:39:39.207948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:39.207959 | orchestrator | Sunday 23 November 2025 00:39:32 +0000 (0:00:00.832) 0:00:07.497 ******* 2025-11-23 00:39:39.207970 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.207981 | orchestrator | 2025-11-23 00:39:39.207992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:39.208003 | orchestrator | Sunday 23 November 2025 00:39:33 +0000 (0:00:00.188) 0:00:07.686 ******* 2025-11-23 00:39:39.208013 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208024 | orchestrator | 2025-11-23 00:39:39.208036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:39.208047 | orchestrator | Sunday 23 November 2025 00:39:33 +0000 (0:00:00.187) 0:00:07.873 ******* 2025-11-23 00:39:39.208058 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208069 | orchestrator | 2025-11-23 00:39:39.208080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:39.208090 | orchestrator | Sunday 23 November 2025 00:39:33 +0000 (0:00:00.192) 0:00:08.066 ******* 2025-11-23 00:39:39.208101 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208112 | orchestrator | 2025-11-23 00:39:39.208123 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-23 00:39:39.208134 | orchestrator | Sunday 23 November 2025 00:39:33 +0000 (0:00:00.182) 0:00:08.249 ******* 2025-11-23 00:39:39.208145 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208155 | orchestrator | 2025-11-23 00:39:39.208166 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-23 00:39:39.208177 | orchestrator | Sunday 23 November 2025 00:39:33 +0000 (0:00:00.123) 
0:00:08.372 ******* 2025-11-23 00:39:39.208188 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}}) 2025-11-23 00:39:39.208200 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '939e3465-cd43-5a63-a3e3-1280596736df'}}) 2025-11-23 00:39:39.208236 | orchestrator | 2025-11-23 00:39:39.208248 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-23 00:39:39.208261 | orchestrator | Sunday 23 November 2025 00:39:34 +0000 (0:00:00.175) 0:00:08.547 ******* 2025-11-23 00:39:39.208273 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}) 2025-11-23 00:39:39.208287 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'}) 2025-11-23 00:39:39.208299 | orchestrator | 2025-11-23 00:39:39.208312 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-23 00:39:39.208323 | orchestrator | Sunday 23 November 2025 00:39:35 +0000 (0:00:01.883) 0:00:10.431 ******* 2025-11-23 00:39:39.208336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:39.208356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:39.208376 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208392 | orchestrator | 2025-11-23 00:39:39.208409 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-23 00:39:39.208426 | orchestrator | Sunday 23 November 2025 
00:39:36 +0000 (0:00:00.134) 0:00:10.566 ******* 2025-11-23 00:39:39.208446 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}) 2025-11-23 00:39:39.208466 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'}) 2025-11-23 00:39:39.208508 | orchestrator | 2025-11-23 00:39:39.208530 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-23 00:39:39.208549 | orchestrator | Sunday 23 November 2025 00:39:37 +0000 (0:00:01.423) 0:00:11.990 ******* 2025-11-23 00:39:39.208567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:39.208583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:39.208594 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208605 | orchestrator | 2025-11-23 00:39:39.208616 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-23 00:39:39.208627 | orchestrator | Sunday 23 November 2025 00:39:37 +0000 (0:00:00.130) 0:00:12.121 ******* 2025-11-23 00:39:39.208656 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208668 | orchestrator | 2025-11-23 00:39:39.208678 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-23 00:39:39.208689 | orchestrator | Sunday 23 November 2025 00:39:37 +0000 (0:00:00.126) 0:00:12.247 ******* 2025-11-23 00:39:39.208700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 
'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:39.208712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:39.208722 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208733 | orchestrator | 2025-11-23 00:39:39.208744 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-23 00:39:39.208755 | orchestrator | Sunday 23 November 2025 00:39:37 +0000 (0:00:00.246) 0:00:12.494 ******* 2025-11-23 00:39:39.208766 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208776 | orchestrator | 2025-11-23 00:39:39.208798 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-23 00:39:39.208809 | orchestrator | Sunday 23 November 2025 00:39:38 +0000 (0:00:00.129) 0:00:12.623 ******* 2025-11-23 00:39:39.208820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:39.208831 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:39.208842 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208853 | orchestrator | 2025-11-23 00:39:39.208864 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-23 00:39:39.208875 | orchestrator | Sunday 23 November 2025 00:39:38 +0000 (0:00:00.143) 0:00:12.767 ******* 2025-11-23 00:39:39.208886 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208896 | orchestrator | 2025-11-23 00:39:39.208908 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-23 00:39:39.208918 | orchestrator | Sunday 
23 November 2025 00:39:38 +0000 (0:00:00.129) 0:00:12.896 ******* 2025-11-23 00:39:39.208929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:39.208940 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:39.208951 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.208962 | orchestrator | 2025-11-23 00:39:39.208973 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-23 00:39:39.208984 | orchestrator | Sunday 23 November 2025 00:39:38 +0000 (0:00:00.140) 0:00:13.037 ******* 2025-11-23 00:39:39.209014 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:39:39.209025 | orchestrator | 2025-11-23 00:39:39.209036 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-23 00:39:39.209052 | orchestrator | Sunday 23 November 2025 00:39:38 +0000 (0:00:00.140) 0:00:13.177 ******* 2025-11-23 00:39:39.209063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:39.209074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:39.209085 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.209096 | orchestrator | 2025-11-23 00:39:39.209107 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-23 00:39:39.209118 | orchestrator | Sunday 23 November 2025 00:39:38 +0000 (0:00:00.142) 0:00:13.320 ******* 2025-11-23 00:39:39.209129 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:39.209139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:39.209150 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.209161 | orchestrator | 2025-11-23 00:39:39.209172 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-23 00:39:39.209182 | orchestrator | Sunday 23 November 2025 00:39:38 +0000 (0:00:00.149) 0:00:13.469 ******* 2025-11-23 00:39:39.209193 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:39.209204 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:39.209214 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.209232 | orchestrator | 2025-11-23 00:39:39.209243 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-23 00:39:39.209253 | orchestrator | Sunday 23 November 2025 00:39:39 +0000 (0:00:00.135) 0:00:13.605 ******* 2025-11-23 00:39:39.209264 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:39.209275 | orchestrator | 2025-11-23 00:39:39.209286 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-23 00:39:39.209303 | orchestrator | Sunday 23 November 2025 00:39:39 +0000 (0:00:00.109) 0:00:13.714 ******* 2025-11-23 00:39:44.807813 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.807953 | orchestrator | 2025-11-23 00:39:44.807982 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-11-23 00:39:44.808003 | orchestrator | Sunday 23 November 2025 00:39:39 +0000 (0:00:00.105) 0:00:13.819 ******* 2025-11-23 00:39:44.808023 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.808043 | orchestrator | 2025-11-23 00:39:44.808061 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-23 00:39:44.808082 | orchestrator | Sunday 23 November 2025 00:39:39 +0000 (0:00:00.130) 0:00:13.949 ******* 2025-11-23 00:39:44.808101 | orchestrator | ok: [testbed-node-3] => { 2025-11-23 00:39:44.808115 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-23 00:39:44.808222 | orchestrator | } 2025-11-23 00:39:44.808245 | orchestrator | 2025-11-23 00:39:44.808265 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-23 00:39:44.808282 | orchestrator | Sunday 23 November 2025 00:39:39 +0000 (0:00:00.229) 0:00:14.179 ******* 2025-11-23 00:39:44.808298 | orchestrator | ok: [testbed-node-3] => { 2025-11-23 00:39:44.808316 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-23 00:39:44.808334 | orchestrator | } 2025-11-23 00:39:44.808352 | orchestrator | 2025-11-23 00:39:44.808371 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-23 00:39:44.808391 | orchestrator | Sunday 23 November 2025 00:39:39 +0000 (0:00:00.116) 0:00:14.295 ******* 2025-11-23 00:39:44.808410 | orchestrator | ok: [testbed-node-3] => { 2025-11-23 00:39:44.808428 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-23 00:39:44.808446 | orchestrator | } 2025-11-23 00:39:44.808465 | orchestrator | 2025-11-23 00:39:44.808535 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-23 00:39:44.808560 | orchestrator | Sunday 23 November 2025 00:39:39 +0000 (0:00:00.117) 0:00:14.413 ******* 2025-11-23 00:39:44.808579 | orchestrator | ok: 
[testbed-node-3] 2025-11-23 00:39:44.808600 | orchestrator | 2025-11-23 00:39:44.808618 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-23 00:39:44.808636 | orchestrator | Sunday 23 November 2025 00:39:40 +0000 (0:00:00.610) 0:00:15.024 ******* 2025-11-23 00:39:44.808653 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:39:44.808672 | orchestrator | 2025-11-23 00:39:44.808690 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-23 00:39:44.808708 | orchestrator | Sunday 23 November 2025 00:39:40 +0000 (0:00:00.466) 0:00:15.491 ******* 2025-11-23 00:39:44.808725 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:39:44.808742 | orchestrator | 2025-11-23 00:39:44.808760 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-23 00:39:44.808779 | orchestrator | Sunday 23 November 2025 00:39:41 +0000 (0:00:00.471) 0:00:15.963 ******* 2025-11-23 00:39:44.808798 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:39:44.808816 | orchestrator | 2025-11-23 00:39:44.808921 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-23 00:39:44.808947 | orchestrator | Sunday 23 November 2025 00:39:41 +0000 (0:00:00.148) 0:00:16.111 ******* 2025-11-23 00:39:44.808965 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.808983 | orchestrator | 2025-11-23 00:39:44.809002 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-23 00:39:44.809020 | orchestrator | Sunday 23 November 2025 00:39:41 +0000 (0:00:00.112) 0:00:16.224 ******* 2025-11-23 00:39:44.809073 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.809094 | orchestrator | 2025-11-23 00:39:44.809132 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-23 00:39:44.809153 | orchestrator | 
Sunday 23 November 2025 00:39:41 +0000 (0:00:00.112) 0:00:16.336 ******* 2025-11-23 00:39:44.809173 | orchestrator | ok: [testbed-node-3] => { 2025-11-23 00:39:44.809263 | orchestrator |  "vgs_report": { 2025-11-23 00:39:44.809285 | orchestrator |  "vg": [] 2025-11-23 00:39:44.809409 | orchestrator |  } 2025-11-23 00:39:44.809432 | orchestrator | } 2025-11-23 00:39:44.809451 | orchestrator | 2025-11-23 00:39:44.809468 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-23 00:39:44.809515 | orchestrator | Sunday 23 November 2025 00:39:41 +0000 (0:00:00.122) 0:00:16.459 ******* 2025-11-23 00:39:44.809536 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.809554 | orchestrator | 2025-11-23 00:39:44.809572 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-23 00:39:44.809591 | orchestrator | Sunday 23 November 2025 00:39:42 +0000 (0:00:00.136) 0:00:16.596 ******* 2025-11-23 00:39:44.809609 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.809628 | orchestrator | 2025-11-23 00:39:44.809646 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-23 00:39:44.809663 | orchestrator | Sunday 23 November 2025 00:39:42 +0000 (0:00:00.126) 0:00:16.722 ******* 2025-11-23 00:39:44.809682 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.809701 | orchestrator | 2025-11-23 00:39:44.809719 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-23 00:39:44.809739 | orchestrator | Sunday 23 November 2025 00:39:42 +0000 (0:00:00.232) 0:00:16.955 ******* 2025-11-23 00:39:44.809756 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.809776 | orchestrator | 2025-11-23 00:39:44.809795 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-23 00:39:44.809815 | orchestrator | Sunday 
23 November 2025 00:39:42 +0000 (0:00:00.121) 0:00:17.077 ******* 2025-11-23 00:39:44.809834 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.809853 | orchestrator | 2025-11-23 00:39:44.809872 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-23 00:39:44.809892 | orchestrator | Sunday 23 November 2025 00:39:42 +0000 (0:00:00.120) 0:00:17.197 ******* 2025-11-23 00:39:44.809911 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.809929 | orchestrator | 2025-11-23 00:39:44.809947 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-23 00:39:44.809966 | orchestrator | Sunday 23 November 2025 00:39:42 +0000 (0:00:00.138) 0:00:17.336 ******* 2025-11-23 00:39:44.809985 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810004 | orchestrator | 2025-11-23 00:39:44.810095 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-23 00:39:44.810109 | orchestrator | Sunday 23 November 2025 00:39:42 +0000 (0:00:00.120) 0:00:17.456 ******* 2025-11-23 00:39:44.810185 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810198 | orchestrator | 2025-11-23 00:39:44.810209 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-23 00:39:44.810220 | orchestrator | Sunday 23 November 2025 00:39:43 +0000 (0:00:00.113) 0:00:17.570 ******* 2025-11-23 00:39:44.810231 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810241 | orchestrator | 2025-11-23 00:39:44.810252 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-23 00:39:44.810263 | orchestrator | Sunday 23 November 2025 00:39:43 +0000 (0:00:00.128) 0:00:17.699 ******* 2025-11-23 00:39:44.810273 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810284 | orchestrator | 2025-11-23 00:39:44.810294 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-23 00:39:44.810305 | orchestrator | Sunday 23 November 2025 00:39:43 +0000 (0:00:00.138) 0:00:17.838 ******* 2025-11-23 00:39:44.810315 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810326 | orchestrator | 2025-11-23 00:39:44.810355 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-23 00:39:44.810366 | orchestrator | Sunday 23 November 2025 00:39:43 +0000 (0:00:00.123) 0:00:17.961 ******* 2025-11-23 00:39:44.810377 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810388 | orchestrator | 2025-11-23 00:39:44.810399 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-23 00:39:44.810409 | orchestrator | Sunday 23 November 2025 00:39:43 +0000 (0:00:00.125) 0:00:18.086 ******* 2025-11-23 00:39:44.810420 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810430 | orchestrator | 2025-11-23 00:39:44.810441 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-23 00:39:44.810452 | orchestrator | Sunday 23 November 2025 00:39:43 +0000 (0:00:00.132) 0:00:18.219 ******* 2025-11-23 00:39:44.810463 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810473 | orchestrator | 2025-11-23 00:39:44.810623 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-23 00:39:44.810684 | orchestrator | Sunday 23 November 2025 00:39:43 +0000 (0:00:00.136) 0:00:18.355 ******* 2025-11-23 00:39:44.810705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:44.810725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 
'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:44.810736 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810748 | orchestrator | 2025-11-23 00:39:44.810759 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-23 00:39:44.810770 | orchestrator | Sunday 23 November 2025 00:39:44 +0000 (0:00:00.246) 0:00:18.602 ******* 2025-11-23 00:39:44.810781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:44.810792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:44.810803 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810814 | orchestrator | 2025-11-23 00:39:44.810826 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-23 00:39:44.810837 | orchestrator | Sunday 23 November 2025 00:39:44 +0000 (0:00:00.145) 0:00:18.747 ******* 2025-11-23 00:39:44.810848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:44.810859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:44.810869 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810880 | orchestrator | 2025-11-23 00:39:44.810891 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-23 00:39:44.810902 | orchestrator | Sunday 23 November 2025 00:39:44 +0000 (0:00:00.141) 0:00:18.889 ******* 2025-11-23 00:39:44.810913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:44.810924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:44.810935 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.810945 | orchestrator | 2025-11-23 00:39:44.810956 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-23 00:39:44.810967 | orchestrator | Sunday 23 November 2025 00:39:44 +0000 (0:00:00.141) 0:00:19.030 ******* 2025-11-23 00:39:44.810978 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:44.811016 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:44.811028 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:44.811039 | orchestrator | 2025-11-23 00:39:44.811048 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-23 00:39:44.811058 | orchestrator | Sunday 23 November 2025 00:39:44 +0000 (0:00:00.138) 0:00:19.169 ******* 2025-11-23 00:39:44.811082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:49.608483 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:49.608660 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:49.608679 | orchestrator | 2025-11-23 00:39:49.608693 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-11-23 00:39:49.609526 | orchestrator | Sunday 23 November 2025 00:39:44 +0000 (0:00:00.147) 0:00:19.317 ******* 2025-11-23 00:39:49.609554 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:49.609567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:49.609578 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:49.609589 | orchestrator | 2025-11-23 00:39:49.609600 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-23 00:39:49.609612 | orchestrator | Sunday 23 November 2025 00:39:44 +0000 (0:00:00.143) 0:00:19.460 ******* 2025-11-23 00:39:49.609623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:49.609634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:49.609644 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:49.609655 | orchestrator | 2025-11-23 00:39:49.609665 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-23 00:39:49.609690 | orchestrator | Sunday 23 November 2025 00:39:45 +0000 (0:00:00.145) 0:00:19.606 ******* 2025-11-23 00:39:49.609712 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:39:49.609724 | orchestrator | 2025-11-23 00:39:49.609735 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-23 00:39:49.609745 | orchestrator | Sunday 23 November 2025 00:39:45 +0000 
(0:00:00.549) 0:00:20.155 ******* 2025-11-23 00:39:49.609756 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:39:49.609766 | orchestrator | 2025-11-23 00:39:49.609777 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-23 00:39:49.609788 | orchestrator | Sunday 23 November 2025 00:39:46 +0000 (0:00:00.522) 0:00:20.677 ******* 2025-11-23 00:39:49.609798 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:39:49.609809 | orchestrator | 2025-11-23 00:39:49.609819 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-23 00:39:49.609830 | orchestrator | Sunday 23 November 2025 00:39:46 +0000 (0:00:00.118) 0:00:20.795 ******* 2025-11-23 00:39:49.609859 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'vg_name': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'}) 2025-11-23 00:39:49.609872 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'vg_name': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}) 2025-11-23 00:39:49.609883 | orchestrator | 2025-11-23 00:39:49.609893 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-23 00:39:49.609929 | orchestrator | Sunday 23 November 2025 00:39:46 +0000 (0:00:00.151) 0:00:20.946 ******* 2025-11-23 00:39:49.609940 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:49.609951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:49.609962 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:49.609972 | orchestrator | 2025-11-23 00:39:49.609983 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-11-23 00:39:49.609993 | orchestrator | Sunday 23 November 2025 00:39:46 +0000 (0:00:00.334) 0:00:21.280 ******* 2025-11-23 00:39:49.610004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:49.610069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:49.610083 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:49.610093 | orchestrator | 2025-11-23 00:39:49.610104 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-23 00:39:49.610115 | orchestrator | Sunday 23 November 2025 00:39:46 +0000 (0:00:00.151) 0:00:21.432 ******* 2025-11-23 00:39:49.610125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'})  2025-11-23 00:39:49.610136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'})  2025-11-23 00:39:49.610147 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:39:49.610157 | orchestrator | 2025-11-23 00:39:49.610168 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-23 00:39:49.610179 | orchestrator | Sunday 23 November 2025 00:39:47 +0000 (0:00:00.134) 0:00:21.566 ******* 2025-11-23 00:39:49.610210 | orchestrator | ok: [testbed-node-3] => { 2025-11-23 00:39:49.610222 | orchestrator |  "lvm_report": { 2025-11-23 00:39:49.610234 | orchestrator |  "lv": [ 2025-11-23 00:39:49.610245 | orchestrator |  { 2025-11-23 00:39:49.610256 | orchestrator |  "lv_name": 
"osd-block-939e3465-cd43-5a63-a3e3-1280596736df", 2025-11-23 00:39:49.610267 | orchestrator |  "vg_name": "ceph-939e3465-cd43-5a63-a3e3-1280596736df" 2025-11-23 00:39:49.610278 | orchestrator |  }, 2025-11-23 00:39:49.610288 | orchestrator |  { 2025-11-23 00:39:49.610299 | orchestrator |  "lv_name": "osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6", 2025-11-23 00:39:49.610309 | orchestrator |  "vg_name": "ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6" 2025-11-23 00:39:49.610320 | orchestrator |  } 2025-11-23 00:39:49.610330 | orchestrator |  ], 2025-11-23 00:39:49.610341 | orchestrator |  "pv": [ 2025-11-23 00:39:49.610351 | orchestrator |  { 2025-11-23 00:39:49.610362 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-23 00:39:49.610373 | orchestrator |  "vg_name": "ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6" 2025-11-23 00:39:49.610383 | orchestrator |  }, 2025-11-23 00:39:49.610394 | orchestrator |  { 2025-11-23 00:39:49.610404 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-23 00:39:49.610415 | orchestrator |  "vg_name": "ceph-939e3465-cd43-5a63-a3e3-1280596736df" 2025-11-23 00:39:49.610425 | orchestrator |  } 2025-11-23 00:39:49.610436 | orchestrator |  ] 2025-11-23 00:39:49.610447 | orchestrator |  } 2025-11-23 00:39:49.610458 | orchestrator | } 2025-11-23 00:39:49.610468 | orchestrator | 2025-11-23 00:39:49.610479 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-23 00:39:49.610601 | orchestrator | 2025-11-23 00:39:49.610615 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-23 00:39:49.610626 | orchestrator | Sunday 23 November 2025 00:39:47 +0000 (0:00:00.244) 0:00:21.811 ******* 2025-11-23 00:39:49.610636 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-23 00:39:49.610647 | orchestrator | 2025-11-23 00:39:49.610658 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-23 
00:39:49.610668 | orchestrator | Sunday 23 November 2025 00:39:47 +0000 (0:00:00.258) 0:00:22.070 ******* 2025-11-23 00:39:49.610679 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:39:49.610689 | orchestrator | 2025-11-23 00:39:49.610700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:49.610711 | orchestrator | Sunday 23 November 2025 00:39:47 +0000 (0:00:00.236) 0:00:22.306 ******* 2025-11-23 00:39:49.610721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-11-23 00:39:49.610731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-11-23 00:39:49.610742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-11-23 00:39:49.610753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-11-23 00:39:49.610763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-11-23 00:39:49.610781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-11-23 00:39:49.610793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-11-23 00:39:49.610803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-11-23 00:39:49.610814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-11-23 00:39:49.610824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-11-23 00:39:49.610835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-11-23 00:39:49.610845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-11-23 00:39:49.610856 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-11-23 00:39:49.610866 | orchestrator | 2025-11-23 00:39:49.610877 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:49.610887 | orchestrator | Sunday 23 November 2025 00:39:48 +0000 (0:00:00.374) 0:00:22.681 ******* 2025-11-23 00:39:49.610898 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:49.610908 | orchestrator | 2025-11-23 00:39:49.610919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:49.610929 | orchestrator | Sunday 23 November 2025 00:39:48 +0000 (0:00:00.177) 0:00:22.858 ******* 2025-11-23 00:39:49.610940 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:49.610950 | orchestrator | 2025-11-23 00:39:49.610961 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:49.610971 | orchestrator | Sunday 23 November 2025 00:39:48 +0000 (0:00:00.194) 0:00:23.053 ******* 2025-11-23 00:39:49.610982 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:49.610992 | orchestrator | 2025-11-23 00:39:49.611003 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:49.611013 | orchestrator | Sunday 23 November 2025 00:39:49 +0000 (0:00:00.502) 0:00:23.555 ******* 2025-11-23 00:39:49.611024 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:49.611034 | orchestrator | 2025-11-23 00:39:49.611045 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:49.611055 | orchestrator | Sunday 23 November 2025 00:39:49 +0000 (0:00:00.185) 0:00:23.741 ******* 2025-11-23 00:39:49.611065 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:49.611076 | orchestrator | 2025-11-23 00:39:49.611096 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-11-23 00:39:49.611107 | orchestrator | Sunday 23 November 2025 00:39:49 +0000 (0:00:00.182) 0:00:23.923 ******* 2025-11-23 00:39:49.611118 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:49.611129 | orchestrator | 2025-11-23 00:39:49.611148 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:59.986448 | orchestrator | Sunday 23 November 2025 00:39:49 +0000 (0:00:00.194) 0:00:24.118 ******* 2025-11-23 00:39:59.986584 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.986603 | orchestrator | 2025-11-23 00:39:59.986616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:59.986627 | orchestrator | Sunday 23 November 2025 00:39:49 +0000 (0:00:00.185) 0:00:24.303 ******* 2025-11-23 00:39:59.986639 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.986650 | orchestrator | 2025-11-23 00:39:59.986660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:59.986672 | orchestrator | Sunday 23 November 2025 00:39:50 +0000 (0:00:00.220) 0:00:24.524 ******* 2025-11-23 00:39:59.986682 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78) 2025-11-23 00:39:59.986693 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78) 2025-11-23 00:39:59.986748 | orchestrator | 2025-11-23 00:39:59.986759 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:59.986770 | orchestrator | Sunday 23 November 2025 00:39:50 +0000 (0:00:00.461) 0:00:24.985 ******* 2025-11-23 00:39:59.986781 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1) 2025-11-23 00:39:59.986792 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1) 2025-11-23 00:39:59.986803 | orchestrator | 2025-11-23 00:39:59.986814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:59.986825 | orchestrator | Sunday 23 November 2025 00:39:50 +0000 (0:00:00.423) 0:00:25.409 ******* 2025-11-23 00:39:59.986836 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4) 2025-11-23 00:39:59.986847 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4) 2025-11-23 00:39:59.986858 | orchestrator | 2025-11-23 00:39:59.986869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:59.986880 | orchestrator | Sunday 23 November 2025 00:39:51 +0000 (0:00:00.403) 0:00:25.812 ******* 2025-11-23 00:39:59.986891 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f) 2025-11-23 00:39:59.986902 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f) 2025-11-23 00:39:59.986913 | orchestrator | 2025-11-23 00:39:59.986924 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-23 00:39:59.986935 | orchestrator | Sunday 23 November 2025 00:39:51 +0000 (0:00:00.529) 0:00:26.341 ******* 2025-11-23 00:39:59.986946 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-23 00:39:59.986957 | orchestrator | 2025-11-23 00:39:59.986967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.986979 | orchestrator | Sunday 23 November 2025 00:39:52 +0000 (0:00:00.444) 0:00:26.786 ******* 2025-11-23 00:39:59.986989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2025-11-23 00:39:59.987001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-23 00:39:59.987012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-23 00:39:59.987039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-23 00:39:59.987071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-23 00:39:59.987082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-23 00:39:59.987093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-23 00:39:59.987104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-23 00:39:59.987114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-23 00:39:59.987125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-23 00:39:59.987136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-23 00:39:59.987146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-23 00:39:59.987157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-23 00:39:59.987168 | orchestrator | 2025-11-23 00:39:59.987179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987189 | orchestrator | Sunday 23 November 2025 00:39:52 +0000 (0:00:00.677) 0:00:27.464 ******* 2025-11-23 00:39:59.987200 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987212 | orchestrator | 2025-11-23 
00:39:59.987222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987233 | orchestrator | Sunday 23 November 2025 00:39:53 +0000 (0:00:00.200) 0:00:27.665 ******* 2025-11-23 00:39:59.987244 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987255 | orchestrator | 2025-11-23 00:39:59.987265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987276 | orchestrator | Sunday 23 November 2025 00:39:53 +0000 (0:00:00.190) 0:00:27.855 ******* 2025-11-23 00:39:59.987287 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987298 | orchestrator | 2025-11-23 00:39:59.987325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987337 | orchestrator | Sunday 23 November 2025 00:39:53 +0000 (0:00:00.180) 0:00:28.035 ******* 2025-11-23 00:39:59.987347 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987358 | orchestrator | 2025-11-23 00:39:59.987369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987380 | orchestrator | Sunday 23 November 2025 00:39:53 +0000 (0:00:00.219) 0:00:28.254 ******* 2025-11-23 00:39:59.987390 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987401 | orchestrator | 2025-11-23 00:39:59.987412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987423 | orchestrator | Sunday 23 November 2025 00:39:53 +0000 (0:00:00.215) 0:00:28.469 ******* 2025-11-23 00:39:59.987434 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987444 | orchestrator | 2025-11-23 00:39:59.987455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987466 | orchestrator | Sunday 23 November 2025 00:39:54 +0000 (0:00:00.203) 
0:00:28.673 ******* 2025-11-23 00:39:59.987477 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987488 | orchestrator | 2025-11-23 00:39:59.987563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987576 | orchestrator | Sunday 23 November 2025 00:39:54 +0000 (0:00:00.245) 0:00:28.918 ******* 2025-11-23 00:39:59.987586 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987597 | orchestrator | 2025-11-23 00:39:59.987608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987618 | orchestrator | Sunday 23 November 2025 00:39:54 +0000 (0:00:00.196) 0:00:29.115 ******* 2025-11-23 00:39:59.987629 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-23 00:39:59.987640 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-23 00:39:59.987651 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-23 00:39:59.987670 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-23 00:39:59.987681 | orchestrator | 2025-11-23 00:39:59.987692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987703 | orchestrator | Sunday 23 November 2025 00:39:55 +0000 (0:00:00.759) 0:00:29.875 ******* 2025-11-23 00:39:59.987713 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987724 | orchestrator | 2025-11-23 00:39:59.987735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987745 | orchestrator | Sunday 23 November 2025 00:39:55 +0000 (0:00:00.239) 0:00:30.114 ******* 2025-11-23 00:39:59.987756 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987766 | orchestrator | 2025-11-23 00:39:59.987777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987788 | orchestrator | Sunday 23 
November 2025 00:39:56 +0000 (0:00:00.498) 0:00:30.613 ******* 2025-11-23 00:39:59.987798 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987809 | orchestrator | 2025-11-23 00:39:59.987819 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-23 00:39:59.987830 | orchestrator | Sunday 23 November 2025 00:39:56 +0000 (0:00:00.192) 0:00:30.806 ******* 2025-11-23 00:39:59.987847 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987858 | orchestrator | 2025-11-23 00:39:59.987869 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-23 00:39:59.987879 | orchestrator | Sunday 23 November 2025 00:39:56 +0000 (0:00:00.212) 0:00:31.019 ******* 2025-11-23 00:39:59.987890 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.987900 | orchestrator | 2025-11-23 00:39:59.987911 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-23 00:39:59.987921 | orchestrator | Sunday 23 November 2025 00:39:56 +0000 (0:00:00.127) 0:00:31.146 ******* 2025-11-23 00:39:59.987932 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c884493c-7b6c-5149-8c24-d999b26a8d07'}}) 2025-11-23 00:39:59.987943 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1076031f-9245-50d5-902f-2c37ef490a74'}}) 2025-11-23 00:39:59.987954 | orchestrator | 2025-11-23 00:39:59.987964 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-23 00:39:59.987975 | orchestrator | Sunday 23 November 2025 00:39:56 +0000 (0:00:00.158) 0:00:31.304 ******* 2025-11-23 00:39:59.987986 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'}) 2025-11-23 00:39:59.987998 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'}) 2025-11-23 00:39:59.988009 | orchestrator | 2025-11-23 00:39:59.988020 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-23 00:39:59.988030 | orchestrator | Sunday 23 November 2025 00:39:58 +0000 (0:00:01.766) 0:00:33.071 ******* 2025-11-23 00:39:59.988041 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})  2025-11-23 00:39:59.988052 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})  2025-11-23 00:39:59.988063 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:39:59.988073 | orchestrator | 2025-11-23 00:39:59.988084 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-23 00:39:59.988095 | orchestrator | Sunday 23 November 2025 00:39:58 +0000 (0:00:00.123) 0:00:33.194 ******* 2025-11-23 00:39:59.988105 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'}) 2025-11-23 00:39:59.988124 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'}) 2025-11-23 00:40:04.868028 | orchestrator | 2025-11-23 00:40:04.868121 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-23 00:40:04.868134 | orchestrator | Sunday 23 November 2025 00:39:59 +0000 (0:00:01.296) 0:00:34.491 ******* 2025-11-23 00:40:04.868143 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 
'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})  2025-11-23 00:40:04.868152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})  2025-11-23 00:40:04.868161 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:40:04.868170 | orchestrator | 2025-11-23 00:40:04.868178 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-23 00:40:04.868186 | orchestrator | Sunday 23 November 2025 00:40:00 +0000 (0:00:00.149) 0:00:34.640 ******* 2025-11-23 00:40:04.868194 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:40:04.868202 | orchestrator | 2025-11-23 00:40:04.868210 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-23 00:40:04.868218 | orchestrator | Sunday 23 November 2025 00:40:00 +0000 (0:00:00.131) 0:00:34.772 ******* 2025-11-23 00:40:04.868226 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})  2025-11-23 00:40:04.868234 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})  2025-11-23 00:40:04.868242 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:40:04.868250 | orchestrator | 2025-11-23 00:40:04.868258 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-23 00:40:04.868266 | orchestrator | Sunday 23 November 2025 00:40:00 +0000 (0:00:00.138) 0:00:34.910 ******* 2025-11-23 00:40:04.868273 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:40:04.868281 | orchestrator | 2025-11-23 00:40:04.868289 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-23 00:40:04.868297 | orchestrator | Sunday 
23 November 2025 00:40:00 +0000 (0:00:00.126) 0:00:35.037 *******
2025-11-23 00:40:04.868305 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:04.868313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:04.868336 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868345 | orchestrator |
2025-11-23 00:40:04.868353 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-11-23 00:40:04.868361 | orchestrator | Sunday 23 November 2025 00:40:00 +0000 (0:00:00.285) 0:00:35.322 *******
2025-11-23 00:40:04.868369 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868377 | orchestrator |
2025-11-23 00:40:04.868385 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-11-23 00:40:04.868393 | orchestrator | Sunday 23 November 2025 00:40:00 +0000 (0:00:00.126) 0:00:35.449 *******
2025-11-23 00:40:04.868401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:04.868409 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:04.868417 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868425 | orchestrator |
2025-11-23 00:40:04.868433 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-11-23 00:40:04.868441 | orchestrator | Sunday 23 November 2025 00:40:01 +0000 (0:00:00.128) 0:00:35.583 *******
2025-11-23 00:40:04.868469 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:40:04.868479 | orchestrator |
2025-11-23 00:40:04.868487 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-11-23 00:40:04.868495 | orchestrator | Sunday 23 November 2025 00:40:01 +0000 (0:00:00.128) 0:00:35.712 *******
2025-11-23 00:40:04.868562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:04.868571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:04.868581 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868590 | orchestrator |
2025-11-23 00:40:04.868600 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-11-23 00:40:04.868609 | orchestrator | Sunday 23 November 2025 00:40:01 +0000 (0:00:00.134) 0:00:35.846 *******
2025-11-23 00:40:04.868619 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:04.868628 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:04.868637 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868646 | orchestrator |
2025-11-23 00:40:04.868656 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-11-23 00:40:04.868680 | orchestrator | Sunday 23 November 2025 00:40:01 +0000 (0:00:00.136) 0:00:35.983 *******
2025-11-23 00:40:04.868691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:04.868701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:04.868711 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868721 | orchestrator |
2025-11-23 00:40:04.868731 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-11-23 00:40:04.868740 | orchestrator | Sunday 23 November 2025 00:40:01 +0000 (0:00:00.139) 0:00:36.122 *******
2025-11-23 00:40:04.868750 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868760 | orchestrator |
2025-11-23 00:40:04.868770 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-11-23 00:40:04.868779 | orchestrator | Sunday 23 November 2025 00:40:01 +0000 (0:00:00.111) 0:00:36.234 *******
2025-11-23 00:40:04.868790 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868800 | orchestrator |
2025-11-23 00:40:04.868810 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-11-23 00:40:04.868820 | orchestrator | Sunday 23 November 2025 00:40:01 +0000 (0:00:00.111) 0:00:36.345 *******
2025-11-23 00:40:04.868829 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.868838 | orchestrator |
2025-11-23 00:40:04.868846 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-11-23 00:40:04.868855 | orchestrator | Sunday 23 November 2025 00:40:01 +0000 (0:00:00.109) 0:00:36.455 *******
2025-11-23 00:40:04.868864 | orchestrator | ok: [testbed-node-4] => {
2025-11-23 00:40:04.868873 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-11-23 00:40:04.868882 | orchestrator | }
2025-11-23 00:40:04.868890 | orchestrator |
2025-11-23 00:40:04.868899 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-11-23 00:40:04.868908 | orchestrator | Sunday 23 November 2025 00:40:02 +0000 (0:00:00.155) 0:00:36.611 *******
2025-11-23 00:40:04.868916 | orchestrator | ok: [testbed-node-4] => {
2025-11-23 00:40:04.868925 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-11-23 00:40:04.868933 | orchestrator | }
2025-11-23 00:40:04.868942 | orchestrator |
2025-11-23 00:40:04.868958 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-11-23 00:40:04.868967 | orchestrator | Sunday 23 November 2025 00:40:02 +0000 (0:00:00.126) 0:00:36.738 *******
2025-11-23 00:40:04.868975 | orchestrator | ok: [testbed-node-4] => {
2025-11-23 00:40:04.868984 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-11-23 00:40:04.868993 | orchestrator | }
2025-11-23 00:40:04.869001 | orchestrator |
2025-11-23 00:40:04.869010 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-11-23 00:40:04.869018 | orchestrator | Sunday 23 November 2025 00:40:02 +0000 (0:00:00.257) 0:00:36.995 *******
2025-11-23 00:40:04.869027 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:40:04.869036 | orchestrator |
2025-11-23 00:40:04.869045 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-11-23 00:40:04.869053 | orchestrator | Sunday 23 November 2025 00:40:02 +0000 (0:00:00.513) 0:00:37.509 *******
2025-11-23 00:40:04.869062 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:40:04.869071 | orchestrator |
2025-11-23 00:40:04.869080 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-11-23 00:40:04.869088 | orchestrator | Sunday 23 November 2025 00:40:03 +0000 (0:00:00.475) 0:00:37.985 *******
2025-11-23 00:40:04.869097 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:40:04.869105 | orchestrator |
2025-11-23 00:40:04.869114 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-11-23 00:40:04.869123 | orchestrator | Sunday 23 November 2025 00:40:03 +0000 (0:00:00.131) 0:00:38.462 *******
2025-11-23 00:40:04.869131 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:40:04.869140 | orchestrator |
2025-11-23 00:40:04.869155 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-11-23 00:40:04.869163 | orchestrator | Sunday 23 November 2025 00:40:04 +0000 (0:00:00.131) 0:00:38.594 *******
2025-11-23 00:40:04.869171 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.869180 | orchestrator |
2025-11-23 00:40:04.869188 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-11-23 00:40:04.869197 | orchestrator | Sunday 23 November 2025 00:40:04 +0000 (0:00:00.089) 0:00:38.683 *******
2025-11-23 00:40:04.869205 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.869214 | orchestrator |
2025-11-23 00:40:04.869222 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-11-23 00:40:04.869231 | orchestrator | Sunday 23 November 2025 00:40:04 +0000 (0:00:00.087) 0:00:38.771 *******
2025-11-23 00:40:04.869239 | orchestrator | ok: [testbed-node-4] => {
2025-11-23 00:40:04.869248 | orchestrator |     "vgs_report": {
2025-11-23 00:40:04.869257 | orchestrator |         "vg": []
2025-11-23 00:40:04.869266 | orchestrator |     }
2025-11-23 00:40:04.869275 | orchestrator | }
2025-11-23 00:40:04.869284 | orchestrator |
2025-11-23 00:40:04.869292 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-11-23 00:40:04.869301 | orchestrator | Sunday 23 November 2025 00:40:04 +0000 (0:00:00.112) 0:00:38.883 *******
2025-11-23 00:40:04.869309 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.869318 | orchestrator |
2025-11-23 00:40:04.869326 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-11-23 00:40:04.869335 | orchestrator | Sunday 23 November 2025 00:40:04 +0000 (0:00:00.106) 0:00:38.990 *******
2025-11-23 00:40:04.869343 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.869352 | orchestrator |
2025-11-23 00:40:04.869360 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-11-23 00:40:04.869369 | orchestrator | Sunday 23 November 2025 00:40:04 +0000 (0:00:00.133) 0:00:39.123 *******
2025-11-23 00:40:04.869377 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.869386 | orchestrator |
2025-11-23 00:40:04.869394 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-11-23 00:40:04.869403 | orchestrator | Sunday 23 November 2025 00:40:04 +0000 (0:00:00.124) 0:00:39.248 *******
2025-11-23 00:40:04.869412 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:04.869420 | orchestrator |
2025-11-23 00:40:04.869440 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-11-23 00:40:09.079603 | orchestrator | Sunday 23 November 2025 00:40:04 +0000 (0:00:00.124) 0:00:39.372 *******
2025-11-23 00:40:09.079719 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.079737 | orchestrator |
2025-11-23 00:40:09.079750 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-11-23 00:40:09.079762 | orchestrator | Sunday 23 November 2025 00:40:05 +0000 (0:00:00.250) 0:00:39.623 *******
2025-11-23 00:40:09.079773 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.079784 | orchestrator |
2025-11-23 00:40:09.079796 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-11-23 00:40:09.079807 | orchestrator | Sunday 23 November 2025 00:40:05 +0000 (0:00:00.124) 0:00:39.747 *******
2025-11-23 00:40:09.079818 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.079828 | orchestrator |
2025-11-23 00:40:09.079839 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-11-23 00:40:09.079850 | orchestrator | Sunday 23 November 2025 00:40:05 +0000 (0:00:00.120) 0:00:39.868 *******
2025-11-23 00:40:09.079861 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.079872 | orchestrator |
2025-11-23 00:40:09.079883 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-11-23 00:40:09.079894 | orchestrator | Sunday 23 November 2025 00:40:05 +0000 (0:00:00.134) 0:00:40.003 *******
2025-11-23 00:40:09.079905 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.079916 | orchestrator |
2025-11-23 00:40:09.079927 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-11-23 00:40:09.079938 | orchestrator | Sunday 23 November 2025 00:40:05 +0000 (0:00:00.125) 0:00:40.128 *******
2025-11-23 00:40:09.079949 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.079960 | orchestrator |
2025-11-23 00:40:09.079971 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-11-23 00:40:09.079982 | orchestrator | Sunday 23 November 2025 00:40:05 +0000 (0:00:00.122) 0:00:40.251 *******
2025-11-23 00:40:09.079993 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080004 | orchestrator |
2025-11-23 00:40:09.080014 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-11-23 00:40:09.080025 | orchestrator | Sunday 23 November 2025 00:40:05 +0000 (0:00:00.125) 0:00:40.377 *******
2025-11-23 00:40:09.080036 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080047 | orchestrator |
2025-11-23 00:40:09.080060 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-11-23 00:40:09.080073 | orchestrator | Sunday 23 November 2025 00:40:05 +0000 (0:00:00.134) 0:00:40.511 *******
2025-11-23 00:40:09.080086 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080099 | orchestrator |
2025-11-23 00:40:09.080112 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-11-23 00:40:09.080125 | orchestrator | Sunday 23 November 2025 00:40:06 +0000 (0:00:00.118) 0:00:40.629 *******
2025-11-23 00:40:09.080157 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080171 | orchestrator |
2025-11-23 00:40:09.080184 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-11-23 00:40:09.080211 | orchestrator | Sunday 23 November 2025 00:40:06 +0000 (0:00:00.128) 0:00:40.758 *******
2025-11-23 00:40:09.080225 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.080240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080252 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080263 | orchestrator |
2025-11-23 00:40:09.080274 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-11-23 00:40:09.080285 | orchestrator | Sunday 23 November 2025 00:40:06 +0000 (0:00:00.136) 0:00:40.895 *******
2025-11-23 00:40:09.080320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.080332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080343 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080353 | orchestrator |
2025-11-23 00:40:09.080364 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-11-23 00:40:09.080375 | orchestrator | Sunday 23 November 2025 00:40:06 +0000 (0:00:00.138) 0:00:41.033 *******
2025-11-23 00:40:09.080386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.080397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080408 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080418 | orchestrator |
2025-11-23 00:40:09.080429 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-11-23 00:40:09.080440 | orchestrator | Sunday 23 November 2025 00:40:06 +0000 (0:00:00.264) 0:00:41.297 *******
2025-11-23 00:40:09.080451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.080462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080473 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080484 | orchestrator |
2025-11-23 00:40:09.080542 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-11-23 00:40:09.080557 | orchestrator | Sunday 23 November 2025 00:40:06 +0000 (0:00:00.141) 0:00:41.439 *******
2025-11-23 00:40:09.080568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.080579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080590 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080601 | orchestrator |
2025-11-23 00:40:09.080612 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-11-23 00:40:09.080624 | orchestrator | Sunday 23 November 2025 00:40:07 +0000 (0:00:00.145) 0:00:41.585 *******
2025-11-23 00:40:09.080635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.080646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080657 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080668 | orchestrator |
2025-11-23 00:40:09.080679 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-11-23 00:40:09.080691 | orchestrator | Sunday 23 November 2025 00:40:07 +0000 (0:00:00.155) 0:00:41.740 *******
2025-11-23 00:40:09.080702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.080713 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080724 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080735 | orchestrator |
2025-11-23 00:40:09.080746 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-11-23 00:40:09.080765 | orchestrator | Sunday 23 November 2025 00:40:07 +0000 (0:00:00.134) 0:00:41.875 *******
2025-11-23 00:40:09.080776 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.080793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080805 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.080816 | orchestrator |
2025-11-23 00:40:09.080827 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-11-23 00:40:09.080838 | orchestrator | Sunday 23 November 2025 00:40:07 +0000 (0:00:00.162) 0:00:42.037 *******
2025-11-23 00:40:09.080849 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:40:09.080860 | orchestrator |
2025-11-23 00:40:09.080871 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-11-23 00:40:09.080882 | orchestrator | Sunday 23 November 2025 00:40:08 +0000 (0:00:00.498) 0:00:42.536 *******
2025-11-23 00:40:09.080893 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:40:09.080904 | orchestrator |
2025-11-23 00:40:09.080915 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-11-23 00:40:09.080926 | orchestrator | Sunday 23 November 2025 00:40:08 +0000 (0:00:00.486) 0:00:43.022 *******
2025-11-23 00:40:09.080937 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:40:09.080948 | orchestrator |
2025-11-23 00:40:09.080959 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-11-23 00:40:09.080971 | orchestrator | Sunday 23 November 2025 00:40:08 +0000 (0:00:00.147) 0:00:43.169 *******
2025-11-23 00:40:09.080982 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'vg_name': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.080994 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'vg_name': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.081005 | orchestrator |
2025-11-23 00:40:09.081016 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-11-23 00:40:09.081026 | orchestrator | Sunday 23 November 2025 00:40:08 +0000 (0:00:00.153) 0:00:43.322 *******
2025-11-23 00:40:09.081037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.081049 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:09.081060 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:09.081071 | orchestrator |
2025-11-23 00:40:09.081082 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-11-23 00:40:09.081093 | orchestrator | Sunday 23 November 2025 00:40:08 +0000 (0:00:00.135) 0:00:43.458 *******
2025-11-23 00:40:09.081104 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:09.081122 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:14.520908 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:14.521024 | orchestrator |
2025-11-23 00:40:14.521052 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-11-23 00:40:14.521074 | orchestrator | Sunday 23 November 2025 00:40:09 +0000 (0:00:00.130) 0:00:43.588 *******
2025-11-23 00:40:14.521093 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'})
2025-11-23 00:40:14.521115 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'})
2025-11-23 00:40:14.521161 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:40:14.521209 | orchestrator |
2025-11-23 00:40:14.521230 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-11-23 00:40:14.521250 | orchestrator | Sunday 23 November 2025 00:40:09 +0000 (0:00:00.148) 0:00:43.737 *******
2025-11-23 00:40:14.521267 | orchestrator | ok: [testbed-node-4] => {
2025-11-23 00:40:14.521284 | orchestrator |     "lvm_report": {
2025-11-23 00:40:14.521338 | orchestrator |         "lv": [
2025-11-23 00:40:14.521357 | orchestrator |             {
2025-11-23 00:40:14.521375 | orchestrator |                 "lv_name": "osd-block-1076031f-9245-50d5-902f-2c37ef490a74",
2025-11-23 00:40:14.521394 | orchestrator |                 "vg_name": "ceph-1076031f-9245-50d5-902f-2c37ef490a74"
2025-11-23 00:40:14.521412 | orchestrator |             },
2025-11-23 00:40:14.521431 | orchestrator |             {
2025-11-23 00:40:14.521451 | orchestrator |                 "lv_name": "osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07",
2025-11-23 00:40:14.521469 | orchestrator |                 "vg_name": "ceph-c884493c-7b6c-5149-8c24-d999b26a8d07"
2025-11-23 00:40:14.521487 | orchestrator |             }
2025-11-23 00:40:14.521528 | orchestrator |         ],
2025-11-23 00:40:14.521549 | orchestrator |         "pv": [
2025-11-23 00:40:14.521569 | orchestrator |             {
2025-11-23 00:40:14.521587 | orchestrator |                 "pv_name": "/dev/sdb",
2025-11-23 00:40:14.521606 | orchestrator |                 "vg_name": "ceph-c884493c-7b6c-5149-8c24-d999b26a8d07"
2025-11-23 00:40:14.521626 | orchestrator |             },
2025-11-23 00:40:14.521644 | orchestrator |             {
2025-11-23 00:40:14.521663 | orchestrator |                 "pv_name": "/dev/sdc",
2025-11-23 00:40:14.521681 | orchestrator |                 "vg_name": "ceph-1076031f-9245-50d5-902f-2c37ef490a74"
2025-11-23 00:40:14.521701 | orchestrator |             }
2025-11-23 00:40:14.521718 | orchestrator |         ]
2025-11-23 00:40:14.521737 | orchestrator |     }
2025-11-23 00:40:14.521756 | orchestrator | }
2025-11-23 00:40:14.521775 | orchestrator |
2025-11-23 00:40:14.521794 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-11-23 00:40:14.521811 | orchestrator |
2025-11-23 00:40:14.521831 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-23 00:40:14.521851 | orchestrator | Sunday 23 November 2025 00:40:09 +0000 (0:00:00.393) 0:00:44.130 *******
2025-11-23 00:40:14.521869 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-11-23 00:40:14.521888 | orchestrator |
2025-11-23 00:40:14.521907 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-23 00:40:14.521927 | orchestrator | Sunday 23 November 2025 00:40:09 +0000 (0:00:00.224) 0:00:44.354 *******
2025-11-23 00:40:14.521946 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:40:14.521966 | orchestrator |
2025-11-23 00:40:14.521984 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.522003 | orchestrator | Sunday 23 November 2025 00:40:10 +0000 (0:00:00.217) 0:00:44.572 *******
2025-11-23 00:40:14.522126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-11-23 00:40:14.522149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-11-23 00:40:14.522166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-11-23 00:40:14.522184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-11-23 00:40:14.522202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-11-23 00:40:14.522222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-11-23 00:40:14.522241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-11-23 00:40:14.522259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-11-23 00:40:14.522294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-11-23 00:40:14.522314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-11-23 00:40:14.522332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-11-23 00:40:14.522351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-11-23 00:40:14.522370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-11-23 00:40:14.522393 | orchestrator |
2025-11-23 00:40:14.522412 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.522431 | orchestrator | Sunday 23 November 2025 00:40:10 +0000 (0:00:00.390) 0:00:44.963 *******
2025-11-23 00:40:14.522450 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:14.522469 | orchestrator |
2025-11-23 00:40:14.522488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.522580 | orchestrator | Sunday 23 November 2025 00:40:10 +0000 (0:00:00.228) 0:00:45.192 *******
2025-11-23 00:40:14.522605 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:14.522624 | orchestrator |
2025-11-23 00:40:14.522641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.522681 | orchestrator | Sunday 23 November 2025 00:40:10 +0000 (0:00:00.178) 0:00:45.370 *******
2025-11-23 00:40:14.522700 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:14.522717 | orchestrator |
2025-11-23 00:40:14.522734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.522811 | orchestrator | Sunday 23 November 2025 00:40:11 +0000 (0:00:00.185) 0:00:45.556 *******
2025-11-23 00:40:14.522833 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:14.522850 | orchestrator |
2025-11-23 00:40:14.522869 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.522886 | orchestrator | Sunday 23 November 2025 00:40:11 +0000 (0:00:00.188) 0:00:45.744 *******
2025-11-23 00:40:14.522903 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:14.522921 | orchestrator |
2025-11-23 00:40:14.522938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.522955 | orchestrator | Sunday 23 November 2025 00:40:11 +0000 (0:00:00.504) 0:00:46.249 *******
2025-11-23 00:40:14.522973 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:14.522991 | orchestrator |
2025-11-23 00:40:14.523009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.523027 | orchestrator | Sunday 23 November 2025 00:40:11 +0000 (0:00:00.177) 0:00:46.426 *******
2025-11-23 00:40:14.523045 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:14.523062 | orchestrator |
2025-11-23 00:40:14.523080 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.523097 | orchestrator | Sunday 23 November 2025 00:40:12 +0000 (0:00:00.193) 0:00:46.620 *******
2025-11-23 00:40:14.523114 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:14.523130 | orchestrator |
2025-11-23 00:40:14.523145 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.523162 | orchestrator | Sunday 23 November 2025 00:40:12 +0000 (0:00:00.185) 0:00:46.805 *******
2025-11-23 00:40:14.523179 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67)
2025-11-23 00:40:14.523196 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67)
2025-11-23 00:40:14.523212 | orchestrator |
2025-11-23 00:40:14.523228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.523245 | orchestrator | Sunday 23 November 2025 00:40:12 +0000 (0:00:00.402) 0:00:47.207 *******
2025-11-23 00:40:14.523262 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa)
2025-11-23 00:40:14.523279 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa)
2025-11-23 00:40:14.523308 | orchestrator |
2025-11-23 00:40:14.523332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.523349 | orchestrator | Sunday 23 November 2025 00:40:13 +0000 (0:00:00.379) 0:00:47.587 *******
2025-11-23 00:40:14.523366 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b)
2025-11-23 00:40:14.523382 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b)
2025-11-23 00:40:14.523397 | orchestrator |
2025-11-23 00:40:14.523414 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.523430 | orchestrator | Sunday 23 November 2025 00:40:13 +0000 (0:00:00.381) 0:00:47.968 *******
2025-11-23 00:40:14.523444 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f)
2025-11-23 00:40:14.523459 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f)
2025-11-23 00:40:14.523474 | orchestrator |
2025-11-23 00:40:14.523490 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-23 00:40:14.523530 | orchestrator | Sunday 23 November 2025 00:40:13 +0000 (0:00:00.348) 0:00:48.317 *******
2025-11-23 00:40:14.523547 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-11-23 00:40:14.523563 | orchestrator |
2025-11-23 00:40:14.523577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:14.523595 | orchestrator | Sunday 23 November 2025 00:40:14 +0000 (0:00:00.289) 0:00:48.606 *******
2025-11-23 00:40:14.523611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-11-23 00:40:14.523626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-11-23 00:40:14.523642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-11-23 00:40:14.523659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-11-23 00:40:14.523674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-11-23 00:40:14.523687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-11-23 00:40:14.523697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-11-23 00:40:14.523706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-11-23 00:40:14.523715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-11-23 00:40:14.523725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-11-23 00:40:14.523734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-11-23 00:40:14.523758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-11-23 00:40:22.453231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-11-23 00:40:22.453368 | orchestrator |
2025-11-23 00:40:22.453396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.453418 | orchestrator | Sunday 23 November 2025 00:40:14 +0000 (0:00:00.416) 0:00:49.023 *******
2025-11-23 00:40:22.453437 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.453469 | orchestrator |
2025-11-23 00:40:22.453490 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.453553 | orchestrator | Sunday 23 November 2025 00:40:14 +0000 (0:00:00.188) 0:00:49.212 *******
2025-11-23 00:40:22.453575 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.453592 | orchestrator |
2025-11-23 00:40:22.453609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.453657 | orchestrator | Sunday 23 November 2025 00:40:15 +0000 (0:00:00.446) 0:00:49.658 *******
2025-11-23 00:40:22.453676 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.453694 | orchestrator |
2025-11-23 00:40:22.453713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.453732 | orchestrator | Sunday 23 November 2025 00:40:15 +0000 (0:00:00.178) 0:00:49.836 *******
2025-11-23 00:40:22.453750 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.453769 | orchestrator |
2025-11-23 00:40:22.453787 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.453807 | orchestrator | Sunday 23 November 2025 00:40:15 +0000 (0:00:00.183) 0:00:50.020 *******
2025-11-23 00:40:22.453826 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.453846 | orchestrator |
2025-11-23 00:40:22.453864 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.453883 | orchestrator | Sunday 23 November 2025 00:40:15 +0000 (0:00:00.183) 0:00:50.203 *******
2025-11-23 00:40:22.453902 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.453922 | orchestrator |
2025-11-23 00:40:22.453941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.453961 | orchestrator | Sunday 23 November 2025 00:40:15 +0000 (0:00:00.180) 0:00:50.383 *******
2025-11-23 00:40:22.453981 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.454001 | orchestrator |
2025-11-23 00:40:22.454082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.454106 | orchestrator | Sunday 23 November 2025 00:40:16 +0000 (0:00:00.204) 0:00:50.588 *******
2025-11-23 00:40:22.454127 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.454145 | orchestrator |
2025-11-23 00:40:22.454164 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.454200 | orchestrator | Sunday 23 November 2025 00:40:16 +0000 (0:00:00.166) 0:00:50.754 *******
2025-11-23 00:40:22.454220 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-11-23 00:40:22.454241 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-11-23 00:40:22.454260 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-11-23 00:40:22.454280 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-11-23 00:40:22.454299 | orchestrator |
2025-11-23 00:40:22.454318 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.454339 | orchestrator | Sunday 23 November 2025 00:40:16 +0000 (0:00:00.556) 0:00:51.310 *******
2025-11-23 00:40:22.454358 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.454377 | orchestrator |
2025-11-23 00:40:22.454396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.454418 | orchestrator | Sunday 23 November 2025 00:40:16 +0000 (0:00:00.182) 0:00:51.493 *******
2025-11-23 00:40:22.454438 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.454458 | orchestrator |
2025-11-23 00:40:22.454478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.454496 | orchestrator | Sunday 23 November 2025 00:40:17 +0000 (0:00:00.196) 0:00:51.690 *******
2025-11-23 00:40:22.454546 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.454565 | orchestrator |
2025-11-23 00:40:22.454582 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-23 00:40:22.454600 | orchestrator | Sunday 23 November 2025 00:40:17 +0000 (0:00:00.177) 0:00:51.868 *******
2025-11-23 00:40:22.454618 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:40:22.454636 | orchestrator |
2025-11-23 00:40:22.454654 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-11-23 00:40:22.454672 | orchestrator | Sunday 23 November 2025 00:40:17 +0000 (0:00:00.162) 0:00:52.031 *******
2025-11-23 00:40:22.454689 | orchestrator | skipping: [testbed-node-5]
2025-11-23
00:40:22.454707 | orchestrator | 2025-11-23 00:40:22.454726 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-23 00:40:22.454765 | orchestrator | Sunday 23 November 2025 00:40:17 +0000 (0:00:00.232) 0:00:52.263 ******* 2025-11-23 00:40:22.454784 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77b7216-a915-581b-8f3c-a7fc3e50862f'}}) 2025-11-23 00:40:22.454802 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}}) 2025-11-23 00:40:22.454819 | orchestrator | 2025-11-23 00:40:22.454835 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-23 00:40:22.454851 | orchestrator | Sunday 23 November 2025 00:40:17 +0000 (0:00:00.185) 0:00:52.448 ******* 2025-11-23 00:40:22.454870 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'}) 2025-11-23 00:40:22.454888 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}) 2025-11-23 00:40:22.454905 | orchestrator | 2025-11-23 00:40:22.454922 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-23 00:40:22.454971 | orchestrator | Sunday 23 November 2025 00:40:19 +0000 (0:00:01.828) 0:00:54.277 ******* 2025-11-23 00:40:22.454990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:22.455008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:22.455025 | orchestrator | skipping: 
[testbed-node-5] 2025-11-23 00:40:22.455042 | orchestrator | 2025-11-23 00:40:22.455062 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-23 00:40:22.455079 | orchestrator | Sunday 23 November 2025 00:40:19 +0000 (0:00:00.145) 0:00:54.423 ******* 2025-11-23 00:40:22.455098 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'}) 2025-11-23 00:40:22.455116 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}) 2025-11-23 00:40:22.455133 | orchestrator | 2025-11-23 00:40:22.455151 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-23 00:40:22.455169 | orchestrator | Sunday 23 November 2025 00:40:21 +0000 (0:00:01.289) 0:00:55.712 ******* 2025-11-23 00:40:22.455187 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:22.455205 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:22.455316 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:22.455341 | orchestrator | 2025-11-23 00:40:22.455357 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-23 00:40:22.455429 | orchestrator | Sunday 23 November 2025 00:40:21 +0000 (0:00:00.141) 0:00:55.853 ******* 2025-11-23 00:40:22.455445 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:22.455462 | orchestrator | 2025-11-23 00:40:22.455480 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-23 00:40:22.455497 | 
orchestrator | Sunday 23 November 2025 00:40:21 +0000 (0:00:00.114) 0:00:55.968 ******* 2025-11-23 00:40:22.455559 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:22.455578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:22.455596 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:22.455629 | orchestrator | 2025-11-23 00:40:22.455647 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-23 00:40:22.455664 | orchestrator | Sunday 23 November 2025 00:40:21 +0000 (0:00:00.147) 0:00:56.116 ******* 2025-11-23 00:40:22.455681 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:22.455700 | orchestrator | 2025-11-23 00:40:22.455717 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-23 00:40:22.455734 | orchestrator | Sunday 23 November 2025 00:40:21 +0000 (0:00:00.127) 0:00:56.243 ******* 2025-11-23 00:40:22.455752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:22.455770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:22.455788 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:22.455807 | orchestrator | 2025-11-23 00:40:22.455825 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-23 00:40:22.455843 | orchestrator | Sunday 23 November 2025 00:40:21 +0000 (0:00:00.121) 0:00:56.365 ******* 2025-11-23 00:40:22.455861 | orchestrator | 
skipping: [testbed-node-5] 2025-11-23 00:40:22.455879 | orchestrator | 2025-11-23 00:40:22.455897 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-23 00:40:22.455917 | orchestrator | Sunday 23 November 2025 00:40:21 +0000 (0:00:00.120) 0:00:56.485 ******* 2025-11-23 00:40:22.455936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:22.455956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:22.455971 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:22.455990 | orchestrator | 2025-11-23 00:40:22.456008 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-23 00:40:22.456023 | orchestrator | Sunday 23 November 2025 00:40:22 +0000 (0:00:00.125) 0:00:56.611 ******* 2025-11-23 00:40:22.456039 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:22.456055 | orchestrator | 2025-11-23 00:40:22.456071 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-23 00:40:22.456086 | orchestrator | Sunday 23 November 2025 00:40:22 +0000 (0:00:00.227) 0:00:56.838 ******* 2025-11-23 00:40:22.456121 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:27.684882 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:27.684981 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.684995 | orchestrator | 2025-11-23 00:40:27.685006 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-11-23 00:40:27.685017 | orchestrator | Sunday 23 November 2025 00:40:22 +0000 (0:00:00.125) 0:00:56.964 ******* 2025-11-23 00:40:27.685026 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:27.685036 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:27.685044 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685053 | orchestrator | 2025-11-23 00:40:27.685062 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-23 00:40:27.685070 | orchestrator | Sunday 23 November 2025 00:40:22 +0000 (0:00:00.150) 0:00:57.114 ******* 2025-11-23 00:40:27.685079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:27.685110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:27.685119 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685127 | orchestrator | 2025-11-23 00:40:27.685136 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-23 00:40:27.685144 | orchestrator | Sunday 23 November 2025 00:40:22 +0000 (0:00:00.146) 0:00:57.261 ******* 2025-11-23 00:40:27.685153 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685161 | orchestrator | 2025-11-23 00:40:27.685170 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-23 00:40:27.685178 | orchestrator | Sunday 23 November 2025 00:40:22 +0000 
(0:00:00.110) 0:00:57.371 ******* 2025-11-23 00:40:27.685186 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685195 | orchestrator | 2025-11-23 00:40:27.685203 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-23 00:40:27.685212 | orchestrator | Sunday 23 November 2025 00:40:22 +0000 (0:00:00.113) 0:00:57.485 ******* 2025-11-23 00:40:27.685221 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685229 | orchestrator | 2025-11-23 00:40:27.685238 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-23 00:40:27.685246 | orchestrator | Sunday 23 November 2025 00:40:23 +0000 (0:00:00.125) 0:00:57.610 ******* 2025-11-23 00:40:27.685255 | orchestrator | ok: [testbed-node-5] => { 2025-11-23 00:40:27.685264 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-23 00:40:27.685273 | orchestrator | } 2025-11-23 00:40:27.685281 | orchestrator | 2025-11-23 00:40:27.685290 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-23 00:40:27.685298 | orchestrator | Sunday 23 November 2025 00:40:23 +0000 (0:00:00.123) 0:00:57.733 ******* 2025-11-23 00:40:27.685307 | orchestrator | ok: [testbed-node-5] => { 2025-11-23 00:40:27.685316 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-23 00:40:27.685324 | orchestrator | } 2025-11-23 00:40:27.685333 | orchestrator | 2025-11-23 00:40:27.685341 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-23 00:40:27.685350 | orchestrator | Sunday 23 November 2025 00:40:23 +0000 (0:00:00.107) 0:00:57.841 ******* 2025-11-23 00:40:27.685358 | orchestrator | ok: [testbed-node-5] => { 2025-11-23 00:40:27.685367 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-23 00:40:27.685375 | orchestrator | } 2025-11-23 00:40:27.685384 | orchestrator | 2025-11-23 00:40:27.685392 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2025-11-23 00:40:27.685400 | orchestrator | Sunday 23 November 2025 00:40:23 +0000 (0:00:00.126) 0:00:57.968 ******* 2025-11-23 00:40:27.685409 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:27.685418 | orchestrator | 2025-11-23 00:40:27.685428 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-23 00:40:27.685438 | orchestrator | Sunday 23 November 2025 00:40:23 +0000 (0:00:00.486) 0:00:58.454 ******* 2025-11-23 00:40:27.685448 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:27.685457 | orchestrator | 2025-11-23 00:40:27.685467 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-23 00:40:27.685476 | orchestrator | Sunday 23 November 2025 00:40:24 +0000 (0:00:00.475) 0:00:58.929 ******* 2025-11-23 00:40:27.685486 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:27.685496 | orchestrator | 2025-11-23 00:40:27.685505 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-23 00:40:27.685541 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.604) 0:00:59.534 ******* 2025-11-23 00:40:27.685558 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:27.685571 | orchestrator | 2025-11-23 00:40:27.685585 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-23 00:40:27.685595 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.149) 0:00:59.683 ******* 2025-11-23 00:40:27.685612 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685622 | orchestrator | 2025-11-23 00:40:27.685648 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-23 00:40:27.685659 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.090) 0:00:59.774 ******* 2025-11-23 00:40:27.685669 | orchestrator | 
skipping: [testbed-node-5] 2025-11-23 00:40:27.685678 | orchestrator | 2025-11-23 00:40:27.685688 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-23 00:40:27.685698 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.084) 0:00:59.858 ******* 2025-11-23 00:40:27.685708 | orchestrator | ok: [testbed-node-5] => { 2025-11-23 00:40:27.685718 | orchestrator |  "vgs_report": { 2025-11-23 00:40:27.685728 | orchestrator |  "vg": [] 2025-11-23 00:40:27.685753 | orchestrator |  } 2025-11-23 00:40:27.685777 | orchestrator | } 2025-11-23 00:40:27.685797 | orchestrator | 2025-11-23 00:40:27.685806 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-23 00:40:27.685815 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.116) 0:00:59.974 ******* 2025-11-23 00:40:27.685823 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685832 | orchestrator | 2025-11-23 00:40:27.685840 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-23 00:40:27.685849 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.109) 0:01:00.084 ******* 2025-11-23 00:40:27.685858 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685866 | orchestrator | 2025-11-23 00:40:27.685875 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-23 00:40:27.685883 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.133) 0:01:00.218 ******* 2025-11-23 00:40:27.685892 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685900 | orchestrator | 2025-11-23 00:40:27.685909 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-23 00:40:27.685934 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.112) 0:01:00.331 ******* 2025-11-23 00:40:27.685943 | orchestrator | 
skipping: [testbed-node-5] 2025-11-23 00:40:27.685952 | orchestrator | 2025-11-23 00:40:27.685960 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-23 00:40:27.685969 | orchestrator | Sunday 23 November 2025 00:40:25 +0000 (0:00:00.123) 0:01:00.455 ******* 2025-11-23 00:40:27.685978 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.685986 | orchestrator | 2025-11-23 00:40:27.685995 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-23 00:40:27.686003 | orchestrator | Sunday 23 November 2025 00:40:26 +0000 (0:00:00.126) 0:01:00.581 ******* 2025-11-23 00:40:27.686012 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686069 | orchestrator | 2025-11-23 00:40:27.686078 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-23 00:40:27.686087 | orchestrator | Sunday 23 November 2025 00:40:26 +0000 (0:00:00.119) 0:01:00.701 ******* 2025-11-23 00:40:27.686095 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686104 | orchestrator | 2025-11-23 00:40:27.686112 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-23 00:40:27.686121 | orchestrator | Sunday 23 November 2025 00:40:26 +0000 (0:00:00.118) 0:01:00.820 ******* 2025-11-23 00:40:27.686130 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686141 | orchestrator | 2025-11-23 00:40:27.686151 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-23 00:40:27.686168 | orchestrator | Sunday 23 November 2025 00:40:26 +0000 (0:00:00.243) 0:01:01.063 ******* 2025-11-23 00:40:27.686179 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686190 | orchestrator | 2025-11-23 00:40:27.686200 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-23 
00:40:27.686211 | orchestrator | Sunday 23 November 2025 00:40:26 +0000 (0:00:00.114) 0:01:01.177 ******* 2025-11-23 00:40:27.686222 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686241 | orchestrator | 2025-11-23 00:40:27.686252 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-23 00:40:27.686262 | orchestrator | Sunday 23 November 2025 00:40:26 +0000 (0:00:00.114) 0:01:01.292 ******* 2025-11-23 00:40:27.686287 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686308 | orchestrator | 2025-11-23 00:40:27.686319 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-23 00:40:27.686330 | orchestrator | Sunday 23 November 2025 00:40:26 +0000 (0:00:00.126) 0:01:01.419 ******* 2025-11-23 00:40:27.686341 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686351 | orchestrator | 2025-11-23 00:40:27.686362 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-23 00:40:27.686372 | orchestrator | Sunday 23 November 2025 00:40:27 +0000 (0:00:00.123) 0:01:01.542 ******* 2025-11-23 00:40:27.686383 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686393 | orchestrator | 2025-11-23 00:40:27.686404 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-23 00:40:27.686415 | orchestrator | Sunday 23 November 2025 00:40:27 +0000 (0:00:00.121) 0:01:01.663 ******* 2025-11-23 00:40:27.686425 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686436 | orchestrator | 2025-11-23 00:40:27.686447 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-23 00:40:27.686457 | orchestrator | Sunday 23 November 2025 00:40:27 +0000 (0:00:00.124) 0:01:01.788 ******* 2025-11-23 00:40:27.686468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:27.686479 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:27.686490 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686500 | orchestrator | 2025-11-23 00:40:27.686535 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-23 00:40:27.686549 | orchestrator | Sunday 23 November 2025 00:40:27 +0000 (0:00:00.130) 0:01:01.918 ******* 2025-11-23 00:40:27.686560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:27.686571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:27.686582 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:27.686592 | orchestrator | 2025-11-23 00:40:27.686603 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-23 00:40:27.686614 | orchestrator | Sunday 23 November 2025 00:40:27 +0000 (0:00:00.145) 0:01:02.064 ******* 2025-11-23 00:40:27.686634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358356 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358443 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.358453 | orchestrator | 2025-11-23 00:40:30.358461 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2025-11-23 00:40:30.358469 | orchestrator | Sunday 23 November 2025 00:40:27 +0000 (0:00:00.131) 0:01:02.196 ******* 2025-11-23 00:40:30.358476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358490 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.358497 | orchestrator | 2025-11-23 00:40:30.358560 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-23 00:40:30.358568 | orchestrator | Sunday 23 November 2025 00:40:27 +0000 (0:00:00.140) 0:01:02.336 ******* 2025-11-23 00:40:30.358576 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358590 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.358597 | orchestrator | 2025-11-23 00:40:30.358603 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-23 00:40:30.358610 | orchestrator | Sunday 23 November 2025 00:40:27 +0000 (0:00:00.141) 0:01:02.477 ******* 2025-11-23 00:40:30.358616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358644 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.358651 | orchestrator | 2025-11-23 00:40:30.358658 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-23 00:40:30.358665 | orchestrator | Sunday 23 November 2025 00:40:28 +0000 (0:00:00.247) 0:01:02.724 ******* 2025-11-23 00:40:30.358672 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358686 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.358692 | orchestrator | 2025-11-23 00:40:30.358699 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-23 00:40:30.358706 | orchestrator | Sunday 23 November 2025 00:40:28 +0000 (0:00:00.149) 0:01:02.874 ******* 2025-11-23 00:40:30.358713 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358721 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358728 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.358734 | orchestrator | 2025-11-23 00:40:30.358740 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-23 00:40:30.358746 | orchestrator | Sunday 23 November 2025 00:40:28 +0000 (0:00:00.137) 0:01:03.011 ******* 2025-11-23 00:40:30.358752 | 
orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:30.358759 | orchestrator | 2025-11-23 00:40:30.358765 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-23 00:40:30.358771 | orchestrator | Sunday 23 November 2025 00:40:28 +0000 (0:00:00.484) 0:01:03.496 ******* 2025-11-23 00:40:30.358777 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:30.358783 | orchestrator | 2025-11-23 00:40:30.358790 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-23 00:40:30.358797 | orchestrator | Sunday 23 November 2025 00:40:29 +0000 (0:00:00.503) 0:01:03.999 ******* 2025-11-23 00:40:30.358803 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:30.358811 | orchestrator | 2025-11-23 00:40:30.358818 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-23 00:40:30.358824 | orchestrator | Sunday 23 November 2025 00:40:29 +0000 (0:00:00.139) 0:01:04.138 ******* 2025-11-23 00:40:30.358831 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'vg_name': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}) 2025-11-23 00:40:30.358846 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'vg_name': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'}) 2025-11-23 00:40:30.358853 | orchestrator | 2025-11-23 00:40:30.358860 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-23 00:40:30.358867 | orchestrator | Sunday 23 November 2025 00:40:29 +0000 (0:00:00.154) 0:01:04.293 ******* 2025-11-23 00:40:30.358889 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358904 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.358911 | orchestrator | 2025-11-23 00:40:30.358920 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-23 00:40:30.358928 | orchestrator | Sunday 23 November 2025 00:40:29 +0000 (0:00:00.140) 0:01:04.433 ******* 2025-11-23 00:40:30.358936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358943 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358951 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.358958 | orchestrator | 2025-11-23 00:40:30.358966 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-23 00:40:30.358973 | orchestrator | Sunday 23 November 2025 00:40:30 +0000 (0:00:00.143) 0:01:04.576 ******* 2025-11-23 00:40:30.358981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'})  2025-11-23 00:40:30.358988 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'})  2025-11-23 00:40:30.358996 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:30.359003 | orchestrator | 2025-11-23 00:40:30.359011 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-23 00:40:30.359018 | orchestrator | Sunday 23 November 2025 00:40:30 +0000 (0:00:00.139) 0:01:04.716 ******* 2025-11-23 00:40:30.359026 | 
orchestrator | ok: [testbed-node-5] => { 2025-11-23 00:40:30.359033 | orchestrator |  "lvm_report": { 2025-11-23 00:40:30.359047 | orchestrator |  "lv": [ 2025-11-23 00:40:30.359056 | orchestrator |  { 2025-11-23 00:40:30.359063 | orchestrator |  "lv_name": "osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a", 2025-11-23 00:40:30.359071 | orchestrator |  "vg_name": "ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a" 2025-11-23 00:40:30.359078 | orchestrator |  }, 2025-11-23 00:40:30.359084 | orchestrator |  { 2025-11-23 00:40:30.359091 | orchestrator |  "lv_name": "osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f", 2025-11-23 00:40:30.359098 | orchestrator |  "vg_name": "ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f" 2025-11-23 00:40:30.359105 | orchestrator |  } 2025-11-23 00:40:30.359113 | orchestrator |  ], 2025-11-23 00:40:30.359120 | orchestrator |  "pv": [ 2025-11-23 00:40:30.359127 | orchestrator |  { 2025-11-23 00:40:30.359135 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-23 00:40:30.359142 | orchestrator |  "vg_name": "ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f" 2025-11-23 00:40:30.359150 | orchestrator |  }, 2025-11-23 00:40:30.359156 | orchestrator |  { 2025-11-23 00:40:30.359164 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-23 00:40:30.359172 | orchestrator |  "vg_name": "ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a" 2025-11-23 00:40:30.359185 | orchestrator |  } 2025-11-23 00:40:30.359192 | orchestrator |  ] 2025-11-23 00:40:30.359198 | orchestrator |  } 2025-11-23 00:40:30.359205 | orchestrator | } 2025-11-23 00:40:30.359212 | orchestrator | 2025-11-23 00:40:30.359218 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:40:30.359224 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-23 00:40:30.359231 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-23 00:40:30.359238 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-23 00:40:30.359245 | orchestrator | 2025-11-23 00:40:30.359251 | orchestrator | 2025-11-23 00:40:30.359257 | orchestrator | 2025-11-23 00:40:30.359263 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:40:30.359270 | orchestrator | Sunday 23 November 2025 00:40:30 +0000 (0:00:00.133) 0:01:04.849 ******* 2025-11-23 00:40:30.359277 | orchestrator | =============================================================================== 2025-11-23 00:40:30.359283 | orchestrator | Create block VGs -------------------------------------------------------- 5.48s 2025-11-23 00:40:30.359289 | orchestrator | Create block LVs -------------------------------------------------------- 4.01s 2025-11-23 00:40:30.359296 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.61s 2025-11-23 00:40:30.359302 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2025-11-23 00:40:30.359309 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s 2025-11-23 00:40:30.359317 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2025-11-23 00:40:30.359324 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s 2025-11-23 00:40:30.359331 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.42s 2025-11-23 00:40:30.359345 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2025-11-23 00:40:30.586921 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2025-11-23 00:40:30.586997 | orchestrator | Print LVM report data --------------------------------------------------- 0.77s 2025-11-23 00:40:30.587004 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-11-23 00:40:30.587009 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2025-11-23 00:40:30.587014 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-11-23 00:40:30.587019 | orchestrator | Get initial list of available block devices ----------------------------- 0.61s 2025-11-23 00:40:30.587023 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.61s 2025-11-23 00:40:30.587028 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2025-11-23 00:40:30.587032 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s 2025-11-23 00:40:30.587047 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.55s 2025-11-23 00:40:30.587052 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.55s 2025-11-23 00:40:42.537298 | orchestrator | 2025-11-23 00:40:42 | INFO  | Task ee4e7a42-dc1e-43a4-80f4-e8919ee79457 (facts) was prepared for execution. 2025-11-23 00:40:42.537382 | orchestrator | 2025-11-23 00:40:42 | INFO  | It takes a moment until task ee4e7a42-dc1e-43a4-80f4-e8919ee79457 (facts) has been started and output is visible here. 
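The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task above merges the LV and PV reports into the single `lvm_report` structure printed later. A minimal sketch of that combination in Python, assuming the commands were run with `--reportformat json` (the function and variable names here are illustrative, not the role's actual code; the field values are taken from the report printed above):

```python
import json

# Sample command output in the shape produced by
# `lvs --reportformat json` / `pvs --reportformat json`
# (values taken from the "Print LVM report data" task above).
_lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a",
     "vg_name": "ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a"},
    {"lv_name": "osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f",
     "vg_name": "ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f"},
]}]})
_pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f"},
    {"pv_name": "/dev/sdc",
     "vg_name": "ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a"},
]}]})

def combine_lvm_report(lvs_raw, pvs_raw):
    """Merge the lv/pv report sections into one lvm_report dict."""
    lv = json.loads(lvs_raw)["report"][0]["lv"]
    pv = json.loads(pvs_raw)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_lvm_report(_lvs_cmd_output, _pvs_cmd_output)

# "Create list of VG/LV names" then derives vg/lv pairs from the lv entries.
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvm_report["lv"]]
```

The later "Fail if … LV defined in lvm_volumes is missing" tasks can then check each configured `data_vg`/`data` pair against `vg_lv_names`.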
2025-11-23 00:40:53.715095 | orchestrator | 2025-11-23 00:40:53.715205 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-23 00:40:53.715241 | orchestrator | 2025-11-23 00:40:53.715254 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-23 00:40:53.715293 | orchestrator | Sunday 23 November 2025 00:40:46 +0000 (0:00:00.232) 0:00:00.232 ******* 2025-11-23 00:40:53.715305 | orchestrator | ok: [testbed-manager] 2025-11-23 00:40:53.715317 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:40:53.715329 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:40:53.715339 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:40:53.715350 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:40:53.715361 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:40:53.715372 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:53.715383 | orchestrator | 2025-11-23 00:40:53.715396 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-23 00:40:53.715407 | orchestrator | Sunday 23 November 2025 00:40:47 +0000 (0:00:00.966) 0:00:01.199 ******* 2025-11-23 00:40:53.715419 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:40:53.715431 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:40:53.715442 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:40:53.715453 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:40:53.715464 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:40:53.715475 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:40:53.715486 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:53.715497 | orchestrator | 2025-11-23 00:40:53.715508 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-23 00:40:53.715519 | orchestrator | 2025-11-23 00:40:53.715549 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-11-23 00:40:53.715562 | orchestrator | Sunday 23 November 2025 00:40:48 +0000 (0:00:01.077) 0:00:02.277 ******* 2025-11-23 00:40:53.715573 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:40:53.715585 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:40:53.715596 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:40:53.715608 | orchestrator | ok: [testbed-manager] 2025-11-23 00:40:53.715619 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:40:53.715631 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:40:53.715644 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:40:53.715657 | orchestrator | 2025-11-23 00:40:53.715670 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-23 00:40:53.715682 | orchestrator | 2025-11-23 00:40:53.715696 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-23 00:40:53.715710 | orchestrator | Sunday 23 November 2025 00:40:53 +0000 (0:00:04.688) 0:00:06.965 ******* 2025-11-23 00:40:53.715723 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:40:53.715736 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:40:53.715750 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:40:53.715763 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:40:53.715776 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:40:53.715789 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:40:53.715802 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:40:53.715815 | orchestrator | 2025-11-23 00:40:53.715828 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:40:53.715842 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:40:53.715856 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-11-23 00:40:53.715870 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:40:53.715883 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:40:53.715896 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:40:53.715909 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:40:53.715930 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:40:53.715944 | orchestrator | 2025-11-23 00:40:53.715957 | orchestrator | 2025-11-23 00:40:53.715970 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:40:53.715983 | orchestrator | Sunday 23 November 2025 00:40:53 +0000 (0:00:00.455) 0:00:07.421 ******* 2025-11-23 00:40:53.715995 | orchestrator | =============================================================================== 2025-11-23 00:40:53.716007 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.69s 2025-11-23 00:40:53.716018 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2025-11-23 00:40:53.716030 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s 2025-11-23 00:40:53.716041 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-11-23 00:41:05.811094 | orchestrator | 2025-11-23 00:41:05 | INFO  | Task 8c11b982-2324-48dc-9de6-e2251f57b466 (frr) was prepared for execution. 2025-11-23 00:41:05.811284 | orchestrator | 2025-11-23 00:41:05 | INFO  | It takes a moment until task 8c11b982-2324-48dc-9de6-e2251f57b466 (frr) has been started and output is visible here. 
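The `osism.commons.facts` role above creates a custom facts directory before gathering. In Ansible, local facts are `*.fact` files under `/etc/ansible/facts.d` (static JSON/INI, or executables emitting JSON) and surface in playbooks as `ansible_local.<name>`. A minimal sketch of writing and reading such a static JSON fact (the `testbed.fact` name and its keys are hypothetical, and a temp directory stands in for `/etc/ansible/facts.d`):

```python
import json
import os
import tempfile

# Stand-in for /etc/ansible/facts.d
facts_d = tempfile.mkdtemp()

# Hypothetical static fact file; keys are illustrative only.
with open(os.path.join(facts_d, "testbed.fact"), "w") as fh:
    json.dump({"role": "compute", "deployed_by": "osism"}, fh)

def load_local_facts(directory):
    """Read every *.fact file as JSON, keyed by basename (like ansible_local)."""
    facts = {}
    for name in sorted(os.listdir(directory)):
        if name.endswith(".fact"):
            with open(os.path.join(directory, name)) as fh:
                facts[name[: -len(".fact")]] = json.load(fh)
    return facts

local = load_local_facts(facts_d)
```

A playbook would reach the same data as `{{ ansible_local.testbed.role }}` after fact gathering.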
2025-11-23 00:41:29.649640 | orchestrator | 2025-11-23 00:41:29.649737 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-11-23 00:41:29.649750 | orchestrator | 2025-11-23 00:41:29.649759 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-11-23 00:41:29.649768 | orchestrator | Sunday 23 November 2025 00:41:09 +0000 (0:00:00.205) 0:00:00.205 ******* 2025-11-23 00:41:29.649776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-11-23 00:41:29.649785 | orchestrator | 2025-11-23 00:41:29.649794 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-11-23 00:41:29.649801 | orchestrator | Sunday 23 November 2025 00:41:09 +0000 (0:00:00.199) 0:00:00.405 ******* 2025-11-23 00:41:29.649809 | orchestrator | changed: [testbed-manager] 2025-11-23 00:41:29.649818 | orchestrator | 2025-11-23 00:41:29.649839 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-11-23 00:41:29.649848 | orchestrator | Sunday 23 November 2025 00:41:10 +0000 (0:00:01.074) 0:00:01.479 ******* 2025-11-23 00:41:29.649856 | orchestrator | changed: [testbed-manager] 2025-11-23 00:41:29.649863 | orchestrator | 2025-11-23 00:41:29.649871 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-11-23 00:41:29.649879 | orchestrator | Sunday 23 November 2025 00:41:19 +0000 (0:00:09.137) 0:00:10.617 ******* 2025-11-23 00:41:29.649887 | orchestrator | ok: [testbed-manager] 2025-11-23 00:41:29.649896 | orchestrator | 2025-11-23 00:41:29.649903 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-11-23 00:41:29.649911 | orchestrator | Sunday 23 November 2025 00:41:20 +0000 (0:00:00.909) 0:00:11.526 ******* 2025-11-23 
00:41:29.649919 | orchestrator | changed: [testbed-manager] 2025-11-23 00:41:29.649927 | orchestrator | 2025-11-23 00:41:29.649934 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-11-23 00:41:29.649942 | orchestrator | Sunday 23 November 2025 00:41:21 +0000 (0:00:00.893) 0:00:12.420 ******* 2025-11-23 00:41:29.649950 | orchestrator | ok: [testbed-manager] 2025-11-23 00:41:29.649958 | orchestrator | 2025-11-23 00:41:29.649966 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-11-23 00:41:29.649974 | orchestrator | Sunday 23 November 2025 00:41:22 +0000 (0:00:01.048) 0:00:13.468 ******* 2025-11-23 00:41:29.649982 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:41:29.649990 | orchestrator | 2025-11-23 00:41:29.649998 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2025-11-23 00:41:29.650073 | orchestrator | Sunday 23 November 2025 00:41:22 +0000 (0:00:00.125) 0:00:13.594 ******* 2025-11-23 00:41:29.650083 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:41:29.650091 | orchestrator | 2025-11-23 00:41:29.650098 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2025-11-23 00:41:29.650106 | orchestrator | Sunday 23 November 2025 00:41:23 +0000 (0:00:00.147) 0:00:13.741 ******* 2025-11-23 00:41:29.650114 | orchestrator | changed: [testbed-manager] 2025-11-23 00:41:29.650122 | orchestrator | 2025-11-23 00:41:29.650130 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-11-23 00:41:29.650138 | orchestrator | Sunday 23 November 2025 00:41:23 +0000 (0:00:00.796) 0:00:14.537 ******* 2025-11-23 00:41:29.650153 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-11-23 00:41:29.650166 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-11-23 00:41:29.650181 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-11-23 00:41:29.650194 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-11-23 00:41:29.650208 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-11-23 00:41:29.650222 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-11-23 00:41:29.650235 | orchestrator | 2025-11-23 00:41:29.650249 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-11-23 00:41:29.650263 | orchestrator | Sunday 23 November 2025 00:41:25 +0000 (0:00:01.958) 0:00:16.496 ******* 2025-11-23 00:41:29.650278 | orchestrator | ok: [testbed-manager] 2025-11-23 00:41:29.650292 | orchestrator | 2025-11-23 00:41:29.650306 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-11-23 00:41:29.650321 | orchestrator | Sunday 23 November 2025 00:41:27 +0000 (0:00:01.287) 0:00:17.784 ******* 2025-11-23 00:41:29.650334 | orchestrator | changed: [testbed-manager] 2025-11-23 00:41:29.650347 | orchestrator | 2025-11-23 00:41:29.650356 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:41:29.650364 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:41:29.650372 | orchestrator | 2025-11-23 00:41:29.650380 | orchestrator | 2025-11-23 00:41:29.650388 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:41:29.650396 | orchestrator | Sunday 23 November 2025 00:41:29 +0000 (0:00:02.329) 0:00:20.113 ******* 2025-11-23 00:41:29.650403 | 
orchestrator | =============================================================================== 2025-11-23 00:41:29.650411 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.14s 2025-11-23 00:41:29.650419 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 2.33s 2025-11-23 00:41:29.650426 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.96s 2025-11-23 00:41:29.650434 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.29s 2025-11-23 00:41:29.650442 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.07s 2025-11-23 00:41:29.650466 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.05s 2025-11-23 00:41:29.650474 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.91s 2025-11-23 00:41:29.650482 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.89s 2025-11-23 00:41:29.650489 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.80s 2025-11-23 00:41:29.650497 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2025-11-23 00:41:29.650505 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2025-11-23 00:41:29.650521 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2025-11-23 00:41:29.833142 | orchestrator | 2025-11-23 00:41:29.836274 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Nov 23 00:41:29 UTC 2025 2025-11-23 00:41:29.836317 | orchestrator | 2025-11-23 00:41:31.507361 | orchestrator | 2025-11-23 00:41:31 | INFO  | Collection nutshell is prepared for execution 2025-11-23 00:41:31.507459 | orchestrator | 2025-11-23 00:41:31 | INFO  | A [0] - 
dotfiles 2025-11-23 00:41:41.597895 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [0] - homer 2025-11-23 00:41:41.598106 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [0] - netdata 2025-11-23 00:41:41.598130 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [0] - openstackclient 2025-11-23 00:41:41.598142 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [0] - phpmyadmin 2025-11-23 00:41:41.598154 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [0] - common 2025-11-23 00:41:41.601750 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [1] -- loadbalancer 2025-11-23 00:41:41.601787 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [2] --- opensearch 2025-11-23 00:41:41.601799 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [2] --- mariadb-ng 2025-11-23 00:41:41.602110 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [3] ---- horizon 2025-11-23 00:41:41.602133 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [3] ---- keystone 2025-11-23 00:41:41.602438 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- neutron 2025-11-23 00:41:41.602459 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [5] ------ wait-for-nova 2025-11-23 00:41:41.602779 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [6] ------- octavia 2025-11-23 00:41:41.604251 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- barbican 2025-11-23 00:41:41.604578 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- designate 2025-11-23 00:41:41.604608 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- ironic 2025-11-23 00:41:41.604627 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- placement 2025-11-23 00:41:41.604646 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- magnum 2025-11-23 00:41:41.605394 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [1] -- openvswitch 2025-11-23 00:41:41.605418 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [2] --- ovn 2025-11-23 00:41:41.605724 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [1] -- memcached 2025-11-23 
00:41:41.605749 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [1] -- redis 2025-11-23 00:41:41.606265 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [1] -- rabbitmq-ng 2025-11-23 00:41:41.606340 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [0] - kubernetes 2025-11-23 00:41:41.608531 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [1] -- kubeconfig 2025-11-23 00:41:41.608611 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [1] -- copy-kubeconfig 2025-11-23 00:41:41.608841 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [0] - ceph 2025-11-23 00:41:41.610890 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [1] -- ceph-pools 2025-11-23 00:41:41.611320 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [2] --- copy-ceph-keys 2025-11-23 00:41:41.611342 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [3] ---- cephclient 2025-11-23 00:41:41.611353 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2025-11-23 00:41:41.611364 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- wait-for-keystone 2025-11-23 00:41:41.611375 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [5] ------ kolla-ceph-rgw 2025-11-23 00:41:41.611415 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [5] ------ glance 2025-11-23 00:41:41.611721 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [5] ------ cinder 2025-11-23 00:41:41.611743 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [5] ------ nova 2025-11-23 00:41:41.612089 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [4] ----- prometheus 2025-11-23 00:41:41.612109 | orchestrator | 2025-11-23 00:41:41 | INFO  | A [5] ------ grafana 2025-11-23 00:41:41.791222 | orchestrator | 2025-11-23 00:41:41 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-11-23 00:41:41.791333 | orchestrator | 2025-11-23 00:41:41 | INFO  | Tasks are running in the background 2025-11-23 00:41:44.537388 | orchestrator | 2025-11-23 00:41:44 | INFO  | No task IDs specified, wait for all currently running 
tasks 2025-11-23 00:41:46.636326 | orchestrator | 2025-11-23 00:41:46 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:41:46.636451 | orchestrator | 2025-11-23 00:41:46 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:41:46.636710 | orchestrator | 2025-11-23 00:41:46 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:41:46.637215 | orchestrator | 2025-11-23 00:41:46 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:41:46.637888 | orchestrator | 2025-11-23 00:41:46 | INFO  | Task 68ed7549-926a-4ef5-a149-54a0d184b20b is in state STARTED 2025-11-23 00:41:46.639899 | orchestrator | 2025-11-23 00:41:46 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:41:46.639964 | orchestrator | 2025-11-23 00:41:46 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:41:46.639977 | orchestrator | 2025-11-23 00:41:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:41:49.762094 | orchestrator | 2025-11-23 00:41:49 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:41:49.762185 | orchestrator | 2025-11-23 00:41:49 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:41:49.764594 | orchestrator | 2025-11-23 00:41:49 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:41:49.764636 | orchestrator | 2025-11-23 00:41:49 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:41:49.764962 | orchestrator | 2025-11-23 00:41:49 | INFO  | Task 68ed7549-926a-4ef5-a149-54a0d184b20b is in state STARTED 2025-11-23 00:41:49.765549 | orchestrator | 2025-11-23 00:41:49 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:41:49.766271 | orchestrator | 2025-11-23 00:41:49 | INFO  | Task 
1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:41:49.766298 | orchestrator | 2025-11-23 00:41:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:41:52.797090 | orchestrator | 2025-11-23 00:41:52 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:41:52.797249 | orchestrator | 2025-11-23 00:41:52 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:41:52.797783 | orchestrator | 2025-11-23 00:41:52 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:41:52.798177 | orchestrator | 2025-11-23 00:41:52 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:41:52.798831 | orchestrator | 2025-11-23 00:41:52 | INFO  | Task 68ed7549-926a-4ef5-a149-54a0d184b20b is in state STARTED 2025-11-23 00:41:52.799319 | orchestrator | 2025-11-23 00:41:52 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:41:52.802086 | orchestrator | 2025-11-23 00:41:52 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:41:52.805002 | orchestrator | 2025-11-23 00:41:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:41:55.864364 | orchestrator | 2025-11-23 00:41:55 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:41:55.864453 | orchestrator | 2025-11-23 00:41:55 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:41:55.864465 | orchestrator | 2025-11-23 00:41:55 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:41:55.864475 | orchestrator | 2025-11-23 00:41:55 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:41:55.864484 | orchestrator | 2025-11-23 00:41:55 | INFO  | Task 68ed7549-926a-4ef5-a149-54a0d184b20b is in state STARTED 2025-11-23 00:41:55.864493 | orchestrator | 2025-11-23 00:41:55 | INFO  | Task 
53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:41:55.864501 | orchestrator | 2025-11-23 00:41:55 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:41:55.864510 | orchestrator | 2025-11-23 00:41:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:41:58.977993 | orchestrator | 2025-11-23 00:41:58 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:41:59.016895 | orchestrator | 2025-11-23 00:41:59 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:41:59.021794 | orchestrator | 2025-11-23 00:41:59 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:41:59.029163 | orchestrator | 2025-11-23 00:41:59 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:41:59.031381 | orchestrator | 2025-11-23 00:41:59 | INFO  | Task 68ed7549-926a-4ef5-a149-54a0d184b20b is in state STARTED 2025-11-23 00:41:59.034312 | orchestrator | 2025-11-23 00:41:59 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:41:59.034749 | orchestrator | 2025-11-23 00:41:59 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:41:59.034773 | orchestrator | 2025-11-23 00:41:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:02.074239 | orchestrator | 2025-11-23 00:42:02 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:02.074351 | orchestrator | 2025-11-23 00:42:02 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:02.076640 | orchestrator | 2025-11-23 00:42:02 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:02.077172 | orchestrator | 2025-11-23 00:42:02 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:02.080547 | orchestrator | 2025-11-23 00:42:02 | INFO  | Task 
68ed7549-926a-4ef5-a149-54a0d184b20b is in state STARTED 2025-11-23 00:42:02.082591 | orchestrator | 2025-11-23 00:42:02 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:02.087194 | orchestrator | 2025-11-23 00:42:02 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:02.087243 | orchestrator | 2025-11-23 00:42:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:05.177708 | orchestrator | 2025-11-23 00:42:05 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:05.178890 | orchestrator | 2025-11-23 00:42:05 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:05.180495 | orchestrator | 2025-11-23 00:42:05 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:05.181098 | orchestrator | 2025-11-23 00:42:05 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:05.183518 | orchestrator | 2025-11-23 00:42:05 | INFO  | Task 68ed7549-926a-4ef5-a149-54a0d184b20b is in state STARTED 2025-11-23 00:42:05.184013 | orchestrator | 2025-11-23 00:42:05 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:05.184789 | orchestrator | 2025-11-23 00:42:05 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:05.184825 | orchestrator | 2025-11-23 00:42:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:08.448712 | orchestrator | 2025-11-23 00:42:08.448836 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-11-23 00:42:08.448852 | orchestrator | 2025-11-23 00:42:08.448864 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-11-23 00:42:08.448876 | orchestrator | Sunday 23 November 2025 00:41:54 +0000 (0:00:00.924) 0:00:00.924 ******* 2025-11-23 00:42:08.448887 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:42:08.448899 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:42:08.448921 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:42:08.448933 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:42:08.448944 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:42:08.448955 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:42:08.448966 | orchestrator | changed: [testbed-manager] 2025-11-23 00:42:08.448977 | orchestrator | 2025-11-23 00:42:08.448989 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-11-23 00:42:08.448999 | orchestrator | Sunday 23 November 2025 00:41:57 +0000 (0:00:03.539) 0:00:04.463 ******* 2025-11-23 00:42:08.449011 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-11-23 00:42:08.449023 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-11-23 00:42:08.449034 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-11-23 00:42:08.449044 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-11-23 00:42:08.449055 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-11-23 00:42:08.449066 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-11-23 00:42:08.449077 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-11-23 00:42:08.449089 | orchestrator | 2025-11-23 00:42:08.449100 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-11-23 00:42:08.449111 | orchestrator | Sunday 23 November 2025 00:41:59 +0000 (0:00:01.028) 0:00:05.491 ******* 2025-11-23 00:42:08.449127 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-23 00:41:58.766897', 'end': '2025-11-23 00:41:58.777779', 'delta': '0:00:00.010882', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-23 00:42:08.449152 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-23 00:41:58.700063', 'end': '2025-11-23 00:41:58.705560', 'delta': '0:00:00.005497', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-23 00:42:08.449190 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-23 00:41:58.704548', 'end': '2025-11-23 00:41:58.750399', 'delta': '0:00:00.045851', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-23 00:42:08.449232 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-23 00:41:58.714130', 'end': '2025-11-23 00:41:58.723257', 'delta': '0:00:00.009127', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-23 00:42:08.449598 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-23 00:41:58.711732', 'end': '2025-11-23 00:41:58.721445', 'delta': '0:00:00.009713', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-23 00:42:08.449617 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-23 00:41:58.742123', 'end': '2025-11-23 00:41:58.753122', 'delta': '0:00:00.010999', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-23 00:42:08.449630 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-23 00:41:58.782091', 'end': '2025-11-23 00:41:58.791162', 'delta': '0:00:00.009071', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-23 00:42:08.449658 | orchestrator | 2025-11-23 00:42:08.449670 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-11-23 00:42:08.449682 | orchestrator | Sunday 23 November 2025 00:42:00 +0000 (0:00:01.708) 0:00:07.199 ******* 2025-11-23 00:42:08.449693 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-11-23 00:42:08.449704 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-11-23 00:42:08.449715 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-11-23 00:42:08.449725 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-11-23 00:42:08.449736 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-11-23 00:42:08.449747 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-11-23 00:42:08.449757 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-11-23 00:42:08.449768 | orchestrator | 2025-11-23 00:42:08.449779 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-11-23 00:42:08.449789 | orchestrator | Sunday 23 November 2025 00:42:01 +0000 (0:00:01.183) 0:00:08.383 ******* 2025-11-23 00:42:08.449805 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-11-23 00:42:08.449817 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-11-23 00:42:08.449827 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-11-23 00:42:08.449838 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-11-23 00:42:08.449849 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-11-23 00:42:08.449859 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-11-23 00:42:08.449870 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-11-23 00:42:08.449881 | orchestrator | 2025-11-23 00:42:08.449892 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:42:08.449913 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:42:08.449926 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:42:08.449937 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:42:08.449948 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:42:08.449959 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:42:08.449969 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:42:08.449980 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:42:08.449990 | orchestrator | 2025-11-23 00:42:08.450001 | orchestrator | 2025-11-23 00:42:08.450065 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-11-23 00:42:08.450080 | orchestrator | Sunday 23 November 2025 00:42:05 +0000 (0:00:03.507) 0:00:11.891 ******* 2025-11-23 00:42:08.450091 | orchestrator | =============================================================================== 2025-11-23 00:42:08.450109 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.54s 2025-11-23 00:42:08.450120 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.51s 2025-11-23 00:42:08.450131 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.71s 2025-11-23 00:42:08.450144 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.18s 2025-11-23 00:42:08.450162 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.03s 2025-11-23 00:42:08.450175 | orchestrator | 2025-11-23 00:42:08 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:08.450186 | orchestrator | 2025-11-23 00:42:08 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:08.450197 | orchestrator | 2025-11-23 00:42:08 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:08.450208 | orchestrator | 2025-11-23 00:42:08 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:08.450219 | orchestrator | 2025-11-23 00:42:08 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:08.450230 | orchestrator | 2025-11-23 00:42:08 | INFO  | Task 68ed7549-926a-4ef5-a149-54a0d184b20b is in state SUCCESS 2025-11-23 00:42:08.455597 | orchestrator | 2025-11-23 00:42:08 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:08.459047 | orchestrator | 2025-11-23 00:42:08 | INFO  | Task 
1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:08.459121 | orchestrator | 2025-11-23 00:42:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:11.564818 | orchestrator | 2025-11-23 00:42:11 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:11.564925 | orchestrator | 2025-11-23 00:42:11 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:11.564940 | orchestrator | 2025-11-23 00:42:11 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:11.564952 | orchestrator | 2025-11-23 00:42:11 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:11.564963 | orchestrator | 2025-11-23 00:42:11 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:11.564974 | orchestrator | 2025-11-23 00:42:11 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:11.564985 | orchestrator | 2025-11-23 00:42:11 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:11.565016 | orchestrator | 2025-11-23 00:42:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:14.544793 | orchestrator | 2025-11-23 00:42:14 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:14.544901 | orchestrator | 2025-11-23 00:42:14 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:14.545459 | orchestrator | 2025-11-23 00:42:14 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:14.546281 | orchestrator | 2025-11-23 00:42:14 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:14.546974 | orchestrator | 2025-11-23 00:42:14 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:14.549001 | orchestrator | 2025-11-23 00:42:14 | INFO  | Task 
53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:14.549993 | orchestrator | 2025-11-23 00:42:14 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:14.550236 | orchestrator | 2025-11-23 00:42:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:17.616967 | orchestrator | 2025-11-23 00:42:17 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:17.617865 | orchestrator | 2025-11-23 00:42:17 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:17.664476 | orchestrator | 2025-11-23 00:42:17 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:17.665058 | orchestrator | 2025-11-23 00:42:17 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:17.666096 | orchestrator | 2025-11-23 00:42:17 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:17.667168 | orchestrator | 2025-11-23 00:42:17 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:17.667737 | orchestrator | 2025-11-23 00:42:17 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:17.667911 | orchestrator | 2025-11-23 00:42:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:20.800174 | orchestrator | 2025-11-23 00:42:20 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:20.800273 | orchestrator | 2025-11-23 00:42:20 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:20.800288 | orchestrator | 2025-11-23 00:42:20 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:20.800299 | orchestrator | 2025-11-23 00:42:20 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:20.800309 | orchestrator | 2025-11-23 00:42:20 | INFO  | Task 
867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:20.800319 | orchestrator | 2025-11-23 00:42:20 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:20.800328 | orchestrator | 2025-11-23 00:42:20 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:20.800351 | orchestrator | 2025-11-23 00:42:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:23.806848 | orchestrator | 2025-11-23 00:42:23 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:23.806969 | orchestrator | 2025-11-23 00:42:23 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:23.806996 | orchestrator | 2025-11-23 00:42:23 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:23.807016 | orchestrator | 2025-11-23 00:42:23 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:23.807036 | orchestrator | 2025-11-23 00:42:23 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:23.807054 | orchestrator | 2025-11-23 00:42:23 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:23.807073 | orchestrator | 2025-11-23 00:42:23 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:23.807091 | orchestrator | 2025-11-23 00:42:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:26.858315 | orchestrator | 2025-11-23 00:42:26 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:26.858418 | orchestrator | 2025-11-23 00:42:26 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:26.858454 | orchestrator | 2025-11-23 00:42:26 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:26.858493 | orchestrator | 2025-11-23 00:42:26 | INFO  | Task 
a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:26.858504 | orchestrator | 2025-11-23 00:42:26 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:26.858515 | orchestrator | 2025-11-23 00:42:26 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:26.858526 | orchestrator | 2025-11-23 00:42:26 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:26.858537 | orchestrator | 2025-11-23 00:42:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:29.916333 | orchestrator | 2025-11-23 00:42:29 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:30.076758 | orchestrator | 2025-11-23 00:42:29 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:30.077928 | orchestrator | 2025-11-23 00:42:29 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:30.077965 | orchestrator | 2025-11-23 00:42:29 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:30.077978 | orchestrator | 2025-11-23 00:42:29 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state STARTED 2025-11-23 00:42:30.077990 | orchestrator | 2025-11-23 00:42:30 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:30.078001 | orchestrator | 2025-11-23 00:42:30 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:30.078013 | orchestrator | 2025-11-23 00:42:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:33.074950 | orchestrator | 2025-11-23 00:42:33 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:33.075058 | orchestrator | 2025-11-23 00:42:33 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:33.075074 | orchestrator | 2025-11-23 00:42:33 | INFO  | Task 
d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:33.075086 | orchestrator | 2025-11-23 00:42:33 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:33.075098 | orchestrator | 2025-11-23 00:42:33 | INFO  | Task 867e6743-2b37-443f-bad9-a8759a855f93 is in state SUCCESS 2025-11-23 00:42:33.075109 | orchestrator | 2025-11-23 00:42:33 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:33.075120 | orchestrator | 2025-11-23 00:42:33 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:33.075131 | orchestrator | 2025-11-23 00:42:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:36.119014 | orchestrator | 2025-11-23 00:42:36 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:36.119859 | orchestrator | 2025-11-23 00:42:36 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:36.119892 | orchestrator | 2025-11-23 00:42:36 | INFO  | Task d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state STARTED 2025-11-23 00:42:36.120467 | orchestrator | 2025-11-23 00:42:36 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:36.125534 | orchestrator | 2025-11-23 00:42:36 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:36.127839 | orchestrator | 2025-11-23 00:42:36 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:36.127867 | orchestrator | 2025-11-23 00:42:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:39.199669 | orchestrator | 2025-11-23 00:42:39 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:39.208137 | orchestrator | 2025-11-23 00:42:39 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:39.210978 | orchestrator | 2025-11-23 00:42:39 | INFO  | Task 
d9d07a3d-b15d-41ba-9b62-e35a6602a837 is in state SUCCESS 2025-11-23 00:42:39.212516 | orchestrator | 2025-11-23 00:42:39 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:39.215312 | orchestrator | 2025-11-23 00:42:39 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:39.217257 | orchestrator | 2025-11-23 00:42:39 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:39.217306 | orchestrator | 2025-11-23 00:42:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:42.264407 | orchestrator | 2025-11-23 00:42:42 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:42.265509 | orchestrator | 2025-11-23 00:42:42 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:42.266125 | orchestrator | 2025-11-23 00:42:42 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:42.266703 | orchestrator | 2025-11-23 00:42:42 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:42.267665 | orchestrator | 2025-11-23 00:42:42 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:42.267748 | orchestrator | 2025-11-23 00:42:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:45.294653 | orchestrator | 2025-11-23 00:42:45 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:45.295224 | orchestrator | 2025-11-23 00:42:45 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:45.296546 | orchestrator | 2025-11-23 00:42:45 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:45.298157 | orchestrator | 2025-11-23 00:42:45 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:45.299234 | orchestrator | 2025-11-23 00:42:45 | INFO  | Task 
1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:45.299275 | orchestrator | 2025-11-23 00:42:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:48.340437 | orchestrator | 2025-11-23 00:42:48 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:48.342139 | orchestrator | 2025-11-23 00:42:48 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:48.343964 | orchestrator | 2025-11-23 00:42:48 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:48.345906 | orchestrator | 2025-11-23 00:42:48 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:48.347409 | orchestrator | 2025-11-23 00:42:48 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:48.347527 | orchestrator | 2025-11-23 00:42:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:51.434964 | orchestrator | 2025-11-23 00:42:51 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:51.435082 | orchestrator | 2025-11-23 00:42:51 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:51.435103 | orchestrator | 2025-11-23 00:42:51 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:51.435158 | orchestrator | 2025-11-23 00:42:51 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:51.435176 | orchestrator | 2025-11-23 00:42:51 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:51.435193 | orchestrator | 2025-11-23 00:42:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:54.488259 | orchestrator | 2025-11-23 00:42:54 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:54.491220 | orchestrator | 2025-11-23 00:42:54 | INFO  | Task 
ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:54.496156 | orchestrator | 2025-11-23 00:42:54 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:54.504383 | orchestrator | 2025-11-23 00:42:54 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:54.516977 | orchestrator | 2025-11-23 00:42:54 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:54.517028 | orchestrator | 2025-11-23 00:42:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:42:57.547402 | orchestrator | 2025-11-23 00:42:57 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:42:57.547802 | orchestrator | 2025-11-23 00:42:57 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:42:57.549753 | orchestrator | 2025-11-23 00:42:57 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:42:57.549967 | orchestrator | 2025-11-23 00:42:57 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:42:57.551241 | orchestrator | 2025-11-23 00:42:57 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:42:57.551261 | orchestrator | 2025-11-23 00:42:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:43:00.576381 | orchestrator | 2025-11-23 00:43:00 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:43:00.576506 | orchestrator | 2025-11-23 00:43:00 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:43:00.577356 | orchestrator | 2025-11-23 00:43:00 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:43:00.578898 | orchestrator | 2025-11-23 00:43:00 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:43:00.579007 | orchestrator | 2025-11-23 00:43:00 | INFO  | Task 
1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:43:00.579138 | orchestrator | 2025-11-23 00:43:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:43:03.622558 | orchestrator | 2025-11-23 00:43:03 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:43:03.625195 | orchestrator | 2025-11-23 00:43:03 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:43:03.625245 | orchestrator | 2025-11-23 00:43:03 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:43:03.625258 | orchestrator | 2025-11-23 00:43:03 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:43:03.625270 | orchestrator | 2025-11-23 00:43:03 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:43:03.625281 | orchestrator | 2025-11-23 00:43:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:43:06.656289 | orchestrator | 2025-11-23 00:43:06 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:43:06.657306 | orchestrator | 2025-11-23 00:43:06 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:43:06.658662 | orchestrator | 2025-11-23 00:43:06 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:43:06.660123 | orchestrator | 2025-11-23 00:43:06 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:43:06.661466 | orchestrator | 2025-11-23 00:43:06 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:43:06.661825 | orchestrator | 2025-11-23 00:43:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:43:09.786341 | orchestrator | 2025-11-23 00:43:09 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:43:09.786391 | orchestrator | 2025-11-23 00:43:09 | INFO  | Task 
ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:43:09.789415 | orchestrator | 2025-11-23 00:43:09 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:43:09.789922 | orchestrator | 2025-11-23 00:43:09 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:43:09.790655 | orchestrator | 2025-11-23 00:43:09 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state STARTED 2025-11-23 00:43:09.790670 | orchestrator | 2025-11-23 00:43:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:43:12.846691 | orchestrator | 2025-11-23 00:43:12 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:43:12.848379 | orchestrator | 2025-11-23 00:43:12 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:43:12.850479 | orchestrator | 2025-11-23 00:43:12 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED 2025-11-23 00:43:12.851939 | orchestrator | 2025-11-23 00:43:12 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED 2025-11-23 00:43:12.853130 | orchestrator | 2025-11-23 00:43:12 | INFO  | Task 1067d2ce-89f8-4b4e-9bbb-abea6e177bb1 is in state SUCCESS 2025-11-23 00:43:12.855548 | orchestrator | 2025-11-23 00:43:12.855648 | orchestrator | 2025-11-23 00:43:12.855670 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-11-23 00:43:12.855690 | orchestrator | 2025-11-23 00:43:12.855710 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-11-23 00:43:12.855767 | orchestrator | Sunday 23 November 2025 00:41:54 +0000 (0:00:00.532) 0:00:00.532 ******* 2025-11-23 00:43:12.855786 | orchestrator | ok: [testbed-manager] => { 2025-11-23 00:43:12.855806 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. 
Please use the homer_url_opensearch_dashboards parameter."
2025-11-23 00:43:12.855825 | orchestrator | }
2025-11-23 00:43:12.855844 | orchestrator |
2025-11-23 00:43:12.855863 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-11-23 00:43:12.855918 | orchestrator | Sunday 23 November 2025 00:41:55 +0000 (0:00:00.522) 0:00:01.054 *******
2025-11-23 00:43:12.855937 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.855956 | orchestrator |
2025-11-23 00:43:12.855974 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-11-23 00:43:12.855992 | orchestrator | Sunday 23 November 2025 00:41:56 +0000 (0:00:01.407) 0:00:02.462 *******
2025-11-23 00:43:12.856010 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-11-23 00:43:12.856028 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-11-23 00:43:12.856047 | orchestrator |
2025-11-23 00:43:12.856066 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-11-23 00:43:12.856084 | orchestrator | Sunday 23 November 2025 00:41:58 +0000 (0:00:01.845) 0:00:04.307 *******
2025-11-23 00:43:12.856104 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.856124 | orchestrator |
2025-11-23 00:43:12.856167 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-11-23 00:43:12.856188 | orchestrator | Sunday 23 November 2025 00:42:01 +0000 (0:00:03.174) 0:00:07.482 *******
2025-11-23 00:43:12.856207 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.856226 | orchestrator |
2025-11-23 00:43:12.856245 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-11-23 00:43:12.856263 | orchestrator | Sunday 23 November 2025 00:42:02 +0000 (0:00:01.027) 0:00:08.510 *******
2025-11-23 00:43:12.856282 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-11-23 00:43:12.856301 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.856322 | orchestrator |
2025-11-23 00:43:12.856342 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-11-23 00:43:12.856362 | orchestrator | Sunday 23 November 2025 00:42:28 +0000 (0:00:26.334) 0:00:34.844 *******
2025-11-23 00:43:12.856381 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.856401 | orchestrator |
2025-11-23 00:43:12.856421 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:43:12.856440 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.856460 | orchestrator |
2025-11-23 00:43:12.856479 | orchestrator |
2025-11-23 00:43:12.856498 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:43:12.856516 | orchestrator | Sunday 23 November 2025 00:42:30 +0000 (0:00:02.137) 0:00:36.981 *******
2025-11-23 00:43:12.856534 | orchestrator | ===============================================================================
2025-11-23 00:43:12.856552 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.33s
2025-11-23 00:43:12.856570 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.17s
2025-11-23 00:43:12.856588 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.14s
2025-11-23 00:43:12.856630 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.85s
2025-11-23 00:43:12.856649 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.41s
2025-11-23 00:43:12.856667 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.03s
2025-11-23 00:43:12.856685 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.52s
2025-11-23 00:43:12.856703 | orchestrator |
2025-11-23 00:43:12.856722 | orchestrator |
2025-11-23 00:43:12.856740 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-11-23 00:43:12.856758 | orchestrator |
2025-11-23 00:43:12.856776 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-11-23 00:43:12.856793 | orchestrator | Sunday 23 November 2025 00:41:53 +0000 (0:00:00.696) 0:00:00.696 *******
2025-11-23 00:43:12.856811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-11-23 00:43:12.856830 | orchestrator |
2025-11-23 00:43:12.856847 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-11-23 00:43:12.856865 | orchestrator | Sunday 23 November 2025 00:41:53 +0000 (0:00:00.656) 0:00:01.352 *******
2025-11-23 00:43:12.856883 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-11-23 00:43:12.856901 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-11-23 00:43:12.856919 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-11-23 00:43:12.856937 | orchestrator |
2025-11-23 00:43:12.856955 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-11-23 00:43:12.856974 | orchestrator | Sunday 23 November 2025 00:41:56 +0000 (0:00:02.339) 0:00:03.691 *******
2025-11-23 00:43:12.856992 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.857010 | orchestrator |
2025-11-23 00:43:12.857028 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-11-23 00:43:12.857061 | orchestrator | Sunday 23 November 2025 00:41:58 +0000 (0:00:02.529) 0:00:06.221 *******
2025-11-23 00:43:12.857098 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-11-23 00:43:12.857117 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.857133 | orchestrator |
2025-11-23 00:43:12.857151 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-11-23 00:43:12.857170 | orchestrator | Sunday 23 November 2025 00:42:32 +0000 (0:00:33.638) 0:00:39.860 *******
2025-11-23 00:43:12.857189 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.857206 | orchestrator |
2025-11-23 00:43:12.857224 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-11-23 00:43:12.857244 | orchestrator | Sunday 23 November 2025 00:42:33 +0000 (0:00:01.513) 0:00:41.373 *******
2025-11-23 00:43:12.857263 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.857281 | orchestrator |
2025-11-23 00:43:12.857299 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-11-23 00:43:12.857325 | orchestrator | Sunday 23 November 2025 00:42:34 +0000 (0:00:00.715) 0:00:42.089 *******
2025-11-23 00:43:12.857343 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.857362 | orchestrator |
2025-11-23 00:43:12.857380 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-11-23 00:43:12.857398 | orchestrator | Sunday 23 November 2025 00:42:36 +0000 (0:00:02.165) 0:00:44.254 *******
2025-11-23 00:43:12.857415 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.857434 | orchestrator |
2025-11-23 00:43:12.857452 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-11-23 00:43:12.857470 | orchestrator | Sunday 23 November 2025 00:42:37 +0000 (0:00:00.710) 0:00:44.965 *******
2025-11-23 00:43:12.857488 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.857506 | orchestrator |
2025-11-23 00:43:12.857523 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-11-23 00:43:12.857541 | orchestrator | Sunday 23 November 2025 00:42:38 +0000 (0:00:00.851) 0:00:45.817 *******
2025-11-23 00:43:12.857559 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.857578 | orchestrator |
2025-11-23 00:43:12.857616 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:43:12.857636 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.857655 | orchestrator |
2025-11-23 00:43:12.857673 | orchestrator |
2025-11-23 00:43:12.857692 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:43:12.857711 | orchestrator | Sunday 23 November 2025 00:42:38 +0000 (0:00:00.353) 0:00:46.170 *******
2025-11-23 00:43:12.857729 | orchestrator | ===============================================================================
2025-11-23 00:43:12.857748 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.64s
2025-11-23 00:43:12.857766 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.53s
2025-11-23 00:43:12.857784 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.34s
2025-11-23 00:43:12.857803 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.17s
2025-11-23 00:43:12.857821 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.51s
2025-11-23 00:43:12.857839 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.85s
2025-11-23 00:43:12.857857 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.72s
2025-11-23 00:43:12.857876 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.71s
2025-11-23 00:43:12.857893 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.66s
2025-11-23 00:43:12.857911 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.35s
2025-11-23 00:43:12.857930 | orchestrator |
2025-11-23 00:43:12.857960 | orchestrator |
2025-11-23 00:43:12.857978 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 00:43:12.857997 | orchestrator |
2025-11-23 00:43:12.858080 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 00:43:12.858102 | orchestrator | Sunday 23 November 2025 00:41:52 +0000 (0:00:00.679) 0:00:00.679 *******
2025-11-23 00:43:12.858121 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-11-23 00:43:12.858139 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-11-23 00:43:12.858158 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-11-23 00:43:12.858176 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-11-23 00:43:12.858194 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-11-23 00:43:12.858213 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-11-23 00:43:12.858231 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-11-23 00:43:12.858249 | orchestrator |
2025-11-23 00:43:12.858267 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-11-23 00:43:12.858285 | orchestrator |
2025-11-23 00:43:12.858303 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-11-23 00:43:12.858321 | orchestrator | Sunday 23 November 2025 00:41:55 +0000 (0:00:02.222) 0:00:02.902 *******
2025-11-23 00:43:12.858357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:43:12.858379 | orchestrator |
2025-11-23 00:43:12.858398 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-11-23 00:43:12.858416 | orchestrator | Sunday 23 November 2025 00:41:56 +0000 (0:00:01.806) 0:00:04.708 *******
2025-11-23 00:43:12.858435 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:43:12.858454 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:43:12.858474 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:43:12.858492 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:43:12.858510 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:43:12.858542 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:43:12.858562 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.858580 | orchestrator |
2025-11-23 00:43:12.858669 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-11-23 00:43:12.858689 | orchestrator | Sunday 23 November 2025 00:41:58 +0000 (0:00:02.946) 0:00:06.681 *******
2025-11-23 00:43:12.858707 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:43:12.858725 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:43:12.858743 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:43:12.858762 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:43:12.858779 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:43:12.858797 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:43:12.858816 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.858834 | orchestrator |
2025-11-23 00:43:12.858853 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-11-23 00:43:12.858879 | orchestrator | Sunday 23 November 2025 00:42:01 +0000 (0:00:02.946) 0:00:09.628 *******
2025-11-23 00:43:12.858896 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:43:12.858915 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:43:12.858934 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:43:12.858952 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:43:12.858970 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.858989 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:43:12.859006 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:43:12.859022 | orchestrator |
2025-11-23 00:43:12.859037 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-11-23 00:43:12.859054 | orchestrator | Sunday 23 November 2025 00:42:04 +0000 (0:00:02.808) 0:00:12.437 *******
2025-11-23 00:43:12.859070 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:43:12.859097 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:43:12.859113 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:43:12.859129 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:43:12.859145 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:43:12.859160 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:43:12.859176 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.859192 | orchestrator |
2025-11-23 00:43:12.859208 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-11-23 00:43:12.859225 | orchestrator | Sunday 23 November 2025 00:42:16 +0000 (0:00:12.114) 0:00:24.551 *******
2025-11-23 00:43:12.859241 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:43:12.859257 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:43:12.859273 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:43:12.859289 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:43:12.859305 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:43:12.859320 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:43:12.859337 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.859352 | orchestrator |
2025-11-23 00:43:12.859369 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-11-23 00:43:12.859385 | orchestrator | Sunday 23 November 2025 00:42:52 +0000 (0:00:35.609) 0:01:00.161 *******
2025-11-23 00:43:12.859402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:43:12.859420 | orchestrator |
2025-11-23 00:43:12.859436 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-11-23 00:43:12.859452 | orchestrator | Sunday 23 November 2025 00:42:53 +0000 (0:00:01.250) 0:01:01.412 *******
2025-11-23 00:43:12.859468 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-11-23 00:43:12.859484 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-11-23 00:43:12.859500 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-11-23 00:43:12.859516 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-11-23 00:43:12.859532 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-11-23 00:43:12.859548 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-11-23 00:43:12.859564 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-11-23 00:43:12.859580 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-11-23 00:43:12.859628 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-11-23 00:43:12.859645 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-11-23 00:43:12.859661 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-11-23 00:43:12.859678 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-11-23 00:43:12.859693 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-11-23 00:43:12.859709 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-11-23 00:43:12.859725 | orchestrator |
2025-11-23 00:43:12.859742 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-11-23 00:43:12.859759 | orchestrator | Sunday 23 November 2025 00:42:58 +0000 (0:00:04.754) 0:01:06.166 *******
2025-11-23 00:43:12.859775 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.859791 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:43:12.859807 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:43:12.859822 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:43:12.859839 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:43:12.859855 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:43:12.859872 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:43:12.859888 | orchestrator |
2025-11-23 00:43:12.859905 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-11-23 00:43:12.859921 | orchestrator | Sunday 23 November 2025 00:42:59 +0000 (0:00:01.093) 0:01:07.259 *******
2025-11-23 00:43:12.859949 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.859967 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:43:12.859984 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:43:12.860000 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:43:12.860016 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:43:12.860032 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:43:12.860050 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:43:12.860066 | orchestrator |
2025-11-23 00:43:12.860081 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-11-23 00:43:12.860110 | orchestrator | Sunday 23 November 2025 00:43:00 +0000 (0:00:01.491) 0:01:08.751 *******
2025-11-23 00:43:12.860127 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.860144 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:43:12.860160 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:43:12.860176 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:43:12.860193 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:43:12.860208 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:43:12.860224 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:43:12.860240 | orchestrator |
2025-11-23 00:43:12.860256 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-11-23 00:43:12.860272 | orchestrator | Sunday 23 November 2025 00:43:02 +0000 (0:00:01.678) 0:01:10.057 *******
2025-11-23 00:43:12.860288 | orchestrator | ok: [testbed-manager]
2025-11-23 00:43:12.860304 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:43:12.860320 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:43:12.860338 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:43:12.860354 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:43:12.860384 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:43:12.860401 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:43:12.860417 | orchestrator |
2025-11-23 00:43:12.860433 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-11-23 00:43:12.860448 | orchestrator | Sunday 23 November 2025 00:43:03 +0000 (0:00:01.263) 0:01:11.735 *******
2025-11-23 00:43:12.860464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-11-23 00:43:12.860482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:43:12.860498 | orchestrator |
2025-11-23 00:43:12.860514 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-11-23 00:43:12.860530 | orchestrator | Sunday 23 November 2025 00:43:05 +0000 (0:00:01.263) 0:01:12.999 *******
2025-11-23 00:43:12.860546 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.860562 | orchestrator |
2025-11-23 00:43:12.860576 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-11-23 00:43:12.860613 | orchestrator | Sunday 23 November 2025 00:43:07 +0000 (0:00:01.873) 0:01:14.872 *******
2025-11-23 00:43:12.860630 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:43:12.860646 | orchestrator | changed: [testbed-manager]
2025-11-23 00:43:12.860662 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:43:12.860679 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:43:12.860694 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:43:12.860711 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:43:12.860727 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:43:12.860744 | orchestrator |
2025-11-23 00:43:12.860759 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:43:12.860775 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.860793 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.860809 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.860840 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.860857 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.860874 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.860891 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:43:12.860907 | orchestrator |
2025-11-23 00:43:12.860923 | orchestrator |
2025-11-23 00:43:12.860940 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:43:12.860956 | orchestrator | Sunday 23 November 2025 00:43:10 +0000 (0:00:03.503) 0:01:18.375 *******
2025-11-23 00:43:12.860974 | orchestrator | ===============================================================================
2025-11-23 00:43:12.860990 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 35.61s
2025-11-23 00:43:12.861006 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.11s
2025-11-23 00:43:12.861022 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.75s
2025-11-23 00:43:12.861039 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.50s
2025-11-23 00:43:12.861057 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.95s
2025-11-23 00:43:12.861074 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.81s
2025-11-23 00:43:12.861091 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.22s
2025-11-23 00:43:12.861108 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.97s
2025-11-23 00:43:12.861124 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.87s
2025-11-23 00:43:12.861140 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.81s
2025-11-23 00:43:12.861156 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.68s
2025-11-23 00:43:12.861186 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.49s
2025-11-23 00:43:12.861202 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.31s
2025-11-23 00:43:12.861219 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.26s
2025-11-23 00:43:12.861236 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.25s
2025-11-23 00:43:12.861252 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.09s
2025-11-23 00:43:12.861269 | orchestrator | 2025-11-23 00:43:12 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:15.898310 | orchestrator | 2025-11-23 00:43:15 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:15.898420 | orchestrator | 2025-11-23 00:43:15 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:15.898445 | orchestrator | 2025-11-23 00:43:15 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state STARTED
2025-11-23 00:43:15.898465 | orchestrator | 2025-11-23 00:43:15 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:15.898482 | orchestrator | 2025-11-23 00:43:15 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:18.936963 | orchestrator | 2025-11-23 00:43:18 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:18.938883 | orchestrator | 2025-11-23 00:43:18 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:18.939744 | orchestrator | 2025-11-23 00:43:18 | INFO  | Task a9ae8bf0-d359-435d-9e15-4d9960f10441 is in state SUCCESS
2025-11-23 00:43:18.941364 | orchestrator | 2025-11-23 00:43:18 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:18.941774 | orchestrator | 2025-11-23 00:43:18 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:21.988104 | orchestrator | 2025-11-23 00:43:21 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:21.989880 | orchestrator | 2025-11-23 00:43:21 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:21.992082 | orchestrator | 2025-11-23 00:43:21 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:21.993313 | orchestrator | 2025-11-23 00:43:21 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:25.032310 | orchestrator | 2025-11-23 00:43:25 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:25.034925 | orchestrator | 2025-11-23 00:43:25 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:25.036846 | orchestrator | 2025-11-23 00:43:25 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:25.036871 | orchestrator | 2025-11-23 00:43:25 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:28.074318 | orchestrator | 2025-11-23 00:43:28 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:28.076955 | orchestrator | 2025-11-23 00:43:28 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:28.077001 | orchestrator | 2025-11-23 00:43:28 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:28.077014 | orchestrator | 2025-11-23 00:43:28 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:31.121932 | orchestrator | 2025-11-23 00:43:31 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:31.125110 | orchestrator | 2025-11-23 00:43:31 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:31.128886 | orchestrator | 2025-11-23 00:43:31 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:31.128928 | orchestrator | 2025-11-23 00:43:31 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:34.163455 | orchestrator | 2025-11-23 00:43:34 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:34.163961 | orchestrator | 2025-11-23 00:43:34 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:34.165212 | orchestrator | 2025-11-23 00:43:34 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:34.165230 | orchestrator | 2025-11-23 00:43:34 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:37.196245 | orchestrator | 2025-11-23 00:43:37 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:37.197105 | orchestrator | 2025-11-23 00:43:37 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:37.198660 | orchestrator | 2025-11-23 00:43:37 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:37.198825 | orchestrator | 2025-11-23 00:43:37 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:40.233877 | orchestrator | 2025-11-23 00:43:40 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:40.236116 | orchestrator | 2025-11-23 00:43:40 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:40.236345 | orchestrator | 2025-11-23 00:43:40 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:40.236372 | orchestrator | 2025-11-23 00:43:40 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:43.279035 | orchestrator | 2025-11-23 00:43:43 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:43.281056 | orchestrator | 2025-11-23 00:43:43 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:43.283446 | orchestrator | 2025-11-23 00:43:43 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:43.283481 | orchestrator | 2025-11-23 00:43:43 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:46.345131 | orchestrator | 2025-11-23 00:43:46 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:46.346994 | orchestrator | 2025-11-23 00:43:46 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:46.349553 | orchestrator | 2025-11-23 00:43:46 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:46.350316 | orchestrator | 2025-11-23 00:43:46 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:49.383093 | orchestrator | 2025-11-23 00:43:49 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:49.384250 | orchestrator | 2025-11-23 00:43:49 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:49.385067 | orchestrator | 2025-11-23 00:43:49 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:49.385102 | orchestrator | 2025-11-23 00:43:49 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:52.418727 | orchestrator | 2025-11-23 00:43:52 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:52.418861 | orchestrator | 2025-11-23 00:43:52 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:52.418880 | orchestrator | 2025-11-23 00:43:52 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:52.418889 | orchestrator | 2025-11-23 00:43:52 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:55.446864 | orchestrator | 2025-11-23 00:43:55 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:55.446989 | orchestrator | 2025-11-23 00:43:55 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:55.448405 | orchestrator | 2025-11-23 00:43:55 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:55.448592 | orchestrator | 2025-11-23 00:43:55 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:43:58.490799 | orchestrator | 2025-11-23 00:43:58 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:43:58.493217 | orchestrator | 2025-11-23 00:43:58 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:43:58.499236 | orchestrator | 2025-11-23 00:43:58 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:43:58.499291 | orchestrator | 2025-11-23 00:43:58 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:44:01.529226 | orchestrator | 2025-11-23 00:44:01 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:44:01.531205 | orchestrator | 2025-11-23 00:44:01 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:44:01.532870 | orchestrator | 2025-11-23 00:44:01 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:44:01.532933 | orchestrator | 2025-11-23 00:44:01 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:44:04.563118 | orchestrator | 2025-11-23 00:44:04 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:44:04.564121 | orchestrator | 2025-11-23 00:44:04 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:44:04.564852 | orchestrator | 2025-11-23 00:44:04 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:44:04.565261 | orchestrator | 2025-11-23 00:44:04 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:44:07.604259 | orchestrator | 2025-11-23 00:44:07 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:44:07.605278 | orchestrator | 2025-11-23 00:44:07 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:44:07.607085 | orchestrator | 2025-11-23 00:44:07 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:44:07.607128 | orchestrator | 2025-11-23 00:44:07 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:44:10.632602 | orchestrator | 2025-11-23 00:44:10 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:44:10.632719 | orchestrator | 2025-11-23 00:44:10 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:44:10.633225 | orchestrator | 2025-11-23 00:44:10 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:44:10.633246 | orchestrator | 2025-11-23 00:44:10 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:44:13.661455 | orchestrator | 2025-11-23 00:44:13 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:44:13.663332 | orchestrator | 2025-11-23 00:44:13 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:44:13.664835 | orchestrator | 2025-11-23 00:44:13 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:44:13.664983 | orchestrator | 2025-11-23 00:44:13 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:44:16.702251 | orchestrator | 2025-11-23 00:44:16 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:44:16.703764 | orchestrator | 2025-11-23 00:44:16 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:44:16.705476 | orchestrator | 2025-11-23 00:44:16 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:44:16.705728 | orchestrator | 2025-11-23 00:44:16 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:44:19.737726 | orchestrator | 2025-11-23 00:44:19 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:44:19.739813 | orchestrator | 2025-11-23 00:44:19 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:44:19.741929 | orchestrator | 2025-11-23 00:44:19 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state STARTED
2025-11-23 00:44:19.742685 | orchestrator | 2025-11-23 00:44:19 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:44:22.797386 | orchestrator | 2025-11-23 00:44:22 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:44:22.797501 | orchestrator | 2025-11-23 00:44:22 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED
2025-11-23 00:44:22.804745 | orchestrator | 2025-11-23 00:44:22 | INFO  | Task 53f3b1ce-930c-4390-bceb-0b9b518ffb45 is in state SUCCESS
2025-11-23 00:44:22.806449 | orchestrator |
2025-11-23 00:44:22.806480 | orchestrator |
2025-11-23 00:44:22.806545 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-11-23 00:44:22.806582 | orchestrator |
2025-11-23 00:44:22.806660 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-11-23 00:44:22.806667 | orchestrator | Sunday 23 November 2025 00:42:10 +0000 (0:00:00.752) 0:00:00.752 *******
2025-11-23 00:44:22.806673 | orchestrator | ok: [testbed-manager]
2025-11-23 00:44:22.806680 | orchestrator |
2025-11-23 00:44:22.806685 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-11-23 00:44:22.806691 | orchestrator | Sunday 23 November 2025 00:42:12 +0000 (0:00:01.832) 0:00:02.584 *******
2025-11-23 00:44:22.806697 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-11-23 00:44:22.806708 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-11-23 00:44:22.806713 | orchestrator | Sunday 23 November 2025 00:42:12 +0000 (0:00:00.902) 0:00:03.487 *******
2025-11-23 00:44:22.806719 | orchestrator | changed: [testbed-manager]
2025-11-23 00:44:22.806730 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-11-23 00:44:22.806735 | orchestrator | Sunday 23 November 2025 00:42:14 +0000 (0:00:01.102) 0:00:04.590 *******
2025-11-23 00:44:22.806740 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-11-23 00:44:22.806746 | orchestrator | ok: [testbed-manager]
2025-11-23 00:44:22.806756 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-11-23 00:44:22.806762 | orchestrator | Sunday 23 November 2025 00:43:10 +0000 (0:00:56.158) 0:01:00.748 *******
2025-11-23 00:44:22.806767 | orchestrator | changed: [testbed-manager]
2025-11-23 00:44:22.806778 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:44:22.806783 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:44:22.806801 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:44:22.806807 | orchestrator | Sunday 23 November 2025 00:43:15 +0000 (0:00:05.532) 0:01:06.281 *******
2025-11-23 00:44:22.806812 | orchestrator | ===============================================================================
2025-11-23 00:44:22.806817 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.16s
2025-11-23 00:44:22.806828 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.53s
2025-11-23 00:44:22.806833 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.83s
2025-11-23 00:44:22.806839 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.10s
2025-11-23 00:44:22.806844 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.90s
2025-11-23 00:44:22.806860 | orchestrator | PLAY [Apply role common] *******************************************************
2025-11-23 00:44:22.806870 | orchestrator | TASK [common : include_tasks] **************************************************
2025-11-23 00:44:22.806876 | orchestrator | Sunday 23 November 2025 00:41:46 +0000 (0:00:00.213) 0:00:00.213 *******
2025-11-23 00:44:22.806881 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:44:22.806896 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-11-23 00:44:22.806905 | orchestrator | Sunday 23 November 2025 00:41:47 +0000 (0:00:01.155) 0:00:01.368 *******
2025-11-23 00:44:22.806914 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-23 00:44:22.806936 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-23 00:44:22.807573 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-23 00:44:22.807585 | orchestrator | changed: [testbed-node-0] =>
(item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-23 00:44:22.807591 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-23 00:44:22.807597 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-23 00:44:22.807603 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-23 00:44:22.807749 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-23 00:44:22.807757 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-23 00:44:22.807762 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-23 00:44:22.807768 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-23 00:44:22.807774 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-23 00:44:22.807781 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-23 00:44:22.807787 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-23 00:44:22.807793 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-23 00:44:22.807799 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-23 00:44:22.807827 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-23 00:44:22.807834 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-23 00:44:22.807840 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-23 00:44:22.807846 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-23 00:44:22.807851 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-23 00:44:22.807863 | orchestrator | TASK [common : include_tasks] **************************************************
2025-11-23 00:44:22.807869 | orchestrator | Sunday 23 November 2025 00:41:50 +0000 (0:00:03.584) 0:00:04.953 *******
2025-11-23 00:44:22.807874 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:44:22.807887 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-11-23 00:44:22.807893 | orchestrator | Sunday 23 November 2025 00:41:52 +0000 (0:00:04.843) 0:00:06.194 *******
2025-11-23 00:44:22.807902 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
[... the same fluentd item reported changed for testbed-node-0 through testbed-node-5 ...]
2025-11-23 00:44:22.807970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
[... the same kolla-toolbox item reported changed for testbed-manager and testbed-node-1 through testbed-node-5 ...]
2025-11-23 00:44:22.808038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
[... the same cron item reported changed for testbed-manager and testbed-node-1 through testbed-node-5 ...]
2025-11-23 00:44:22.808109 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-11-23 00:44:22.808115 | orchestrator | Sunday 23 November 2025 00:41:57 +0000 (0:00:04.843) 0:00:11.038 *******
2025-11-23 00:44:22.808140 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
[... the same fluentd, kolla-toolbox and cron items reported skipping for testbed-manager and testbed-node-0 through testbed-node-5 ...]
2025-11-23 00:44:22.808185 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:44:22.808215 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:44:22.808245 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:44:22.808295 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:44:22.808301 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:44:22.808306 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:44:22.808335 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:44:22.808347 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-11-23 00:44:22.808353 | orchestrator | Sunday 23 November 2025 00:41:58 +0000 (0:00:01.305) 0:00:12.343 *******
2025-11-23 00:44:22.808359 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-23 00:44:22.808365 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled':
True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808374 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-23 00:44:22.808392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808408 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:44:22.808415 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:44:22.808422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-23 00:44:22.808429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808436 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-23 00:44:22.808453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808472 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:44:22.808479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-23 00:44:22.808489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808502 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:44:22.808508 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:44:22.808515 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-23 00:44:22.808522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808547 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:44:22.808554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-23 00:44:22.808561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.808575 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:44:22.808581 | orchestrator | 2025-11-23 00:44:22.808590 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-11-23 00:44:22.808597 | orchestrator | Sunday 23 November 2025 00:42:01 +0000 (0:00:02.780) 0:00:15.124 ******* 2025-11-23 00:44:22.808603 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:44:22.808640 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:44:22.808647 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:44:22.808654 | orchestrator | skipping: 
[testbed-node-2] 2025-11-23 00:44:22.808661 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:44:22.808667 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:44:22.808673 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:44:22.808680 | orchestrator | 2025-11-23 00:44:22.808686 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-11-23 00:44:22.808693 | orchestrator | Sunday 23 November 2025 00:42:01 +0000 (0:00:00.837) 0:00:15.961 ******* 2025-11-23 00:44:22.808699 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:44:22.808706 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:44:22.808712 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:44:22.808719 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:44:22.808725 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:44:22.808731 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:44:22.808738 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:44:22.808744 | orchestrator | 2025-11-23 00:44:22.808750 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-11-23 00:44:22.808756 | orchestrator | Sunday 23 November 2025 00:42:03 +0000 (0:00:01.469) 0:00:17.431 ******* 2025-11-23 00:44:22.808762 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.808772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.808783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.808789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.808795 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.808801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.808810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808816 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.808825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808831 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808903 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-11-23 00:44:22.808909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.808920 | orchestrator | 2025-11-23 00:44:22.808926 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-11-23 00:44:22.808932 | orchestrator | Sunday 23 November 2025 00:42:09 +0000 (0:00:06.399) 0:00:23.830 ******* 2025-11-23 00:44:22.808938 | orchestrator | [WARNING]: Skipped 2025-11-23 00:44:22.808944 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-11-23 00:44:22.808950 | orchestrator | to this access issue: 2025-11-23 00:44:22.808956 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-11-23 00:44:22.808962 | orchestrator | directory 2025-11-23 00:44:22.808968 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:44:22.808974 | orchestrator | 2025-11-23 00:44:22.808982 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-11-23 00:44:22.808988 | orchestrator | Sunday 23 November 2025 00:42:11 +0000 (0:00:01.485) 0:00:25.315 ******* 2025-11-23 
00:44:22.808993 | orchestrator | [WARNING]: Skipped 2025-11-23 00:44:22.809003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-11-23 00:44:22.809008 | orchestrator | to this access issue: 2025-11-23 00:44:22.809014 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-11-23 00:44:22.809020 | orchestrator | directory 2025-11-23 00:44:22.809026 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:44:22.809032 | orchestrator | 2025-11-23 00:44:22.809037 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-11-23 00:44:22.809043 | orchestrator | Sunday 23 November 2025 00:42:12 +0000 (0:00:01.080) 0:00:26.395 ******* 2025-11-23 00:44:22.809049 | orchestrator | [WARNING]: Skipped 2025-11-23 00:44:22.809055 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-11-23 00:44:22.809060 | orchestrator | to this access issue: 2025-11-23 00:44:22.809066 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-11-23 00:44:22.809072 | orchestrator | directory 2025-11-23 00:44:22.809077 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:44:22.809083 | orchestrator | 2025-11-23 00:44:22.809089 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-11-23 00:44:22.809094 | orchestrator | Sunday 23 November 2025 00:42:13 +0000 (0:00:00.687) 0:00:27.083 ******* 2025-11-23 00:44:22.809100 | orchestrator | [WARNING]: Skipped 2025-11-23 00:44:22.809106 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-11-23 00:44:22.809112 | orchestrator | to this access issue: 2025-11-23 00:44:22.809117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-11-23 00:44:22.809123 | orchestrator | 
directory 2025-11-23 00:44:22.809129 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:44:22.809134 | orchestrator | 2025-11-23 00:44:22.809140 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-11-23 00:44:22.809146 | orchestrator | Sunday 23 November 2025 00:42:13 +0000 (0:00:00.920) 0:00:28.004 ******* 2025-11-23 00:44:22.809151 | orchestrator | changed: [testbed-manager] 2025-11-23 00:44:22.809157 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:44:22.809163 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:44:22.809168 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:44:22.809174 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:44:22.809180 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:44:22.809185 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:44:22.809191 | orchestrator | 2025-11-23 00:44:22.809197 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-11-23 00:44:22.809202 | orchestrator | Sunday 23 November 2025 00:42:19 +0000 (0:00:05.230) 0:00:33.234 ******* 2025-11-23 00:44:22.809208 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-23 00:44:22.809214 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-23 00:44:22.809220 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-23 00:44:22.809229 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-23 00:44:22.809235 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-23 00:44:22.809241 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-23 00:44:22.809246 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-23 00:44:22.809252 | orchestrator | 2025-11-23 00:44:22.809258 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-11-23 00:44:22.809263 | orchestrator | Sunday 23 November 2025 00:42:22 +0000 (0:00:03.768) 0:00:37.003 ******* 2025-11-23 00:44:22.809274 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:44:22.809280 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:44:22.809285 | orchestrator | changed: [testbed-manager] 2025-11-23 00:44:22.809291 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:44:22.809297 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:44:22.809302 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:44:22.809308 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:44:22.809314 | orchestrator | 2025-11-23 00:44:22.809320 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-11-23 00:44:22.809325 | orchestrator | Sunday 23 November 2025 00:42:26 +0000 (0:00:03.100) 0:00:40.103 ******* 2025-11-23 00:44:22.809331 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809338 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.809344 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.809356 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809369 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.809385 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809393 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.809408 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.809420 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809429 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.809447 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:44:22.809462 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809468 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809474 | orchestrator | ok: [testbed-node-4] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:44:22.809480 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:44:22.809485 | orchestrator |
2025-11-23 00:44:22.809491 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-11-23 00:44:22.809497 | orchestrator | Sunday 23 November 2025 00:42:28 +0000 (0:00:02.194) 0:00:42.298 *******
2025-11-23 00:44:22.809507 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-23 00:44:22.809513 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-23 00:44:22.809518 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-23 00:44:22.809529 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-23 00:44:22.809535 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-23 00:44:22.809540 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-23 00:44:22.809546 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-23 00:44:22.809552 | orchestrator |
2025-11-23 00:44:22.809558 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-11-23 00:44:22.809563 | orchestrator | Sunday 23 November 2025 00:42:31 +0000 (0:00:03.526) 0:00:45.825 *******
2025-11-23 00:44:22.809569 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-23 00:44:22.809575 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-23 00:44:22.809580 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-23 00:44:22.809586 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-23 00:44:22.809592 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-23 00:44:22.809597 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-23 00:44:22.809603 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-23 00:44:22.809652 | orchestrator |
2025-11-23 00:44:22.809662 | orchestrator | TASK [common : Check common containers] ****************************************
2025-11-23 00:44:22.809671 | orchestrator | Sunday 23 November 2025 00:42:34 +0000 (0:00:02.259) 0:00:48.085 *******
2025-11-23 00:44:22.809683 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809713 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-23 00:44:22.809756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809766 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:44:22.809836 | orchestrator | 2025-11-23 00:44:22.809844 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-11-23 00:44:22.809850 | orchestrator | Sunday 23 November 2025 00:42:37 +0000 (0:00:03.387) 0:00:51.472 ******* 2025-11-23 00:44:22.809856 | orchestrator | changed: [testbed-manager] 2025-11-23 00:44:22.809862 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:44:22.809868 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:44:22.809874 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:44:22.809879 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:44:22.809885 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:44:22.809891 | orchestrator | changed: [testbed-node-5] 2025-11-23 
00:44:22.809896 | orchestrator |
2025-11-23 00:44:22.809902 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-11-23 00:44:22.809908 | orchestrator | Sunday 23 November 2025 00:42:39 +0000 (0:00:01.763) 0:00:53.235 *******
2025-11-23 00:44:22.809914 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:44:22.809919 | orchestrator | changed: [testbed-manager]
2025-11-23 00:44:22.809925 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:44:22.809931 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:44:22.809936 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:44:22.809942 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:44:22.809948 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:44:22.809953 | orchestrator |
2025-11-23 00:44:22.809959 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-11-23 00:44:22.809965 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:00.070) 0:00:54.352 *******
2025-11-23 00:44:22.809970 | orchestrator |
2025-11-23 00:44:22.809976 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-11-23 00:44:22.809982 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:00.059) 0:00:54.423 *******
2025-11-23 00:44:22.809988 | orchestrator |
2025-11-23 00:44:22.809993 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-11-23 00:44:22.809999 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:00.058) 0:00:54.483 *******
2025-11-23 00:44:22.810005 | orchestrator |
2025-11-23 00:44:22.810011 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-11-23 00:44:22.810052 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:00.058) 0:00:54.541 *******
2025-11-23 00:44:22.810059 | orchestrator |
2025-11-23 00:44:22.810065 | orchestrator |
TASK [common : Flush handlers] *************************************************
2025-11-23 00:44:22.810076 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:00.164) 0:00:54.706 *******
2025-11-23 00:44:22.810082 | orchestrator |
2025-11-23 00:44:22.810088 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-11-23 00:44:22.810095 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:00.058) 0:00:54.764 *******
2025-11-23 00:44:22.810101 | orchestrator |
2025-11-23 00:44:22.810110 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-11-23 00:44:22.810116 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:00.059) 0:00:54.824 *******
2025-11-23 00:44:22.810122 | orchestrator |
2025-11-23 00:44:22.810128 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-11-23 00:44:22.810135 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:00.080) 0:00:54.905 *******
2025-11-23 00:44:22.810141 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:44:22.810147 | orchestrator | changed: [testbed-manager]
2025-11-23 00:44:22.810153 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:44:22.810159 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:44:22.810165 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:44:22.810171 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:44:22.810177 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:44:22.810183 | orchestrator |
2025-11-23 00:44:22.810190 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-11-23 00:44:22.810196 | orchestrator | Sunday 23 November 2025 00:43:21 +0000 (0:00:40.800) 0:01:35.705 *******
2025-11-23 00:44:22.810202 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:44:22.810208 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:44:22.810214 | orchestrator | changed: [testbed-manager]
2025-11-23 00:44:22.810220 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:44:22.810226 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:44:22.810232 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:44:22.810238 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:44:22.810244 | orchestrator |
2025-11-23 00:44:22.810250 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-11-23 00:44:22.810257 | orchestrator | Sunday 23 November 2025 00:44:09 +0000 (0:00:48.154) 0:02:23.860 *******
2025-11-23 00:44:22.810263 | orchestrator | ok: [testbed-manager]
2025-11-23 00:44:22.810269 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:44:22.810275 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:44:22.810281 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:44:22.810287 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:44:22.810293 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:44:22.810299 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:44:22.810305 | orchestrator |
2025-11-23 00:44:22.810311 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-11-23 00:44:22.810318 | orchestrator | Sunday 23 November 2025 00:44:11 +0000 (0:00:01.863) 0:02:25.724 *******
2025-11-23 00:44:22.810324 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:44:22.810330 | orchestrator | changed: [testbed-manager]
2025-11-23 00:44:22.810336 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:44:22.810342 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:44:22.810348 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:44:22.810354 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:44:22.810360 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:44:22.810366 | orchestrator |
2025-11-23 00:44:22.810372 | orchestrator | PLAY RECAP
*********************************************************************
2025-11-23 00:44:22.810379 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-23 00:44:22.810386 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-23 00:44:22.810397 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-23 00:44:22.810407 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-23 00:44:22.810414 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-23 00:44:22.810420 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-23 00:44:22.810426 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-23 00:44:22.810432 | orchestrator |
2025-11-23 00:44:22.810438 | orchestrator |
2025-11-23 00:44:22.810444 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:44:22.810451 | orchestrator | Sunday 23 November 2025 00:44:21 +0000 (0:00:09.780) 0:02:35.505 *******
2025-11-23 00:44:22.810457 | orchestrator | ===============================================================================
2025-11-23 00:44:22.810463 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 48.15s
2025-11-23 00:44:22.810469 | orchestrator | common : Restart fluentd container ------------------------------------- 40.80s
2025-11-23 00:44:22.810475 | orchestrator | common : Restart cron container ----------------------------------------- 9.78s
2025-11-23 00:44:22.810481 | orchestrator | common : Copying over config.json files for services -------------------- 6.40s
2025-11-23 00:44:22.810487 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.23s
2025-11-23 00:44:22.810493 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.84s
2025-11-23 00:44:22.810499 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.77s
2025-11-23 00:44:22.810505 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.58s
2025-11-23 00:44:22.810511 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.53s
2025-11-23 00:44:22.810517 | orchestrator | common : Check common containers ---------------------------------------- 3.39s
2025-11-23 00:44:22.810526 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.10s
2025-11-23 00:44:22.810532 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.78s
2025-11-23 00:44:22.810538 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.26s
2025-11-23 00:44:22.810544 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.20s
2025-11-23 00:44:22.810550 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.86s
2025-11-23 00:44:22.810556 | orchestrator | common : Creating log volume -------------------------------------------- 1.76s
2025-11-23 00:44:22.810562 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.49s
2025-11-23 00:44:22.810568 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.47s
2025-11-23 00:44:22.810575 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.31s
2025-11-23 00:44:22.810581 | orchestrator | common : include_tasks -------------------------------------------------- 1.24s
2025-11-23 00:44:22.810587 |
orchestrator | 2025-11-23 00:44:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:25.860248 | orchestrator | 2025-11-23 00:44:25 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:25.860324 | orchestrator | 2025-11-23 00:44:25 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:25.860332 | orchestrator | 2025-11-23 00:44:25 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:25.861449 | orchestrator | 2025-11-23 00:44:25 | INFO  | Task 410d8e5d-67ac-4555-a239-18d6793d205c is in state STARTED 2025-11-23 00:44:25.865305 | orchestrator | 2025-11-23 00:44:25 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:25.865331 | orchestrator | 2025-11-23 00:44:25 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:25.865337 | orchestrator | 2025-11-23 00:44:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:28.894319 | orchestrator | 2025-11-23 00:44:28 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:28.894426 | orchestrator | 2025-11-23 00:44:28 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:28.896115 | orchestrator | 2025-11-23 00:44:28 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:28.897074 | orchestrator | 2025-11-23 00:44:28 | INFO  | Task 410d8e5d-67ac-4555-a239-18d6793d205c is in state STARTED 2025-11-23 00:44:28.897759 | orchestrator | 2025-11-23 00:44:28 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:28.898664 | orchestrator | 2025-11-23 00:44:28 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:28.898690 | orchestrator | 2025-11-23 00:44:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:31.925428 | orchestrator | 2025-11-23 
00:44:31 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:31.925536 | orchestrator | 2025-11-23 00:44:31 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:31.925883 | orchestrator | 2025-11-23 00:44:31 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:31.926726 | orchestrator | 2025-11-23 00:44:31 | INFO  | Task 410d8e5d-67ac-4555-a239-18d6793d205c is in state STARTED 2025-11-23 00:44:31.927426 | orchestrator | 2025-11-23 00:44:31 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:31.930262 | orchestrator | 2025-11-23 00:44:31 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:31.930317 | orchestrator | 2025-11-23 00:44:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:34.971703 | orchestrator | 2025-11-23 00:44:34 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:34.972457 | orchestrator | 2025-11-23 00:44:34 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:34.973588 | orchestrator | 2025-11-23 00:44:34 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:34.973644 | orchestrator | 2025-11-23 00:44:34 | INFO  | Task 410d8e5d-67ac-4555-a239-18d6793d205c is in state STARTED 2025-11-23 00:44:34.974958 | orchestrator | 2025-11-23 00:44:34 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:34.976178 | orchestrator | 2025-11-23 00:44:34 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:34.976252 | orchestrator | 2025-11-23 00:44:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:38.016745 | orchestrator | 2025-11-23 00:44:38 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:38.016845 | orchestrator | 2025-11-23 
00:44:38 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:38.016862 | orchestrator | 2025-11-23 00:44:38 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:38.018814 | orchestrator | 2025-11-23 00:44:38 | INFO  | Task 410d8e5d-67ac-4555-a239-18d6793d205c is in state STARTED 2025-11-23 00:44:38.020427 | orchestrator | 2025-11-23 00:44:38 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:38.024123 | orchestrator | 2025-11-23 00:44:38 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:38.024171 | orchestrator | 2025-11-23 00:44:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:41.058591 | orchestrator | 2025-11-23 00:44:41 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:41.060276 | orchestrator | 2025-11-23 00:44:41 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:41.061958 | orchestrator | 2025-11-23 00:44:41 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:41.065046 | orchestrator | 2025-11-23 00:44:41 | INFO  | Task 410d8e5d-67ac-4555-a239-18d6793d205c is in state SUCCESS 2025-11-23 00:44:41.066284 | orchestrator | 2025-11-23 00:44:41 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:41.068072 | orchestrator | 2025-11-23 00:44:41 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:41.069246 | orchestrator | 2025-11-23 00:44:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:44.120190 | orchestrator | 2025-11-23 00:44:44 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:44.121654 | orchestrator | 2025-11-23 00:44:44 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:44.123799 | orchestrator | 2025-11-23 
00:44:44 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:44:44.125949 | orchestrator | 2025-11-23 00:44:44 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:44.126965 | orchestrator | 2025-11-23 00:44:44 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:44.128559 | orchestrator | 2025-11-23 00:44:44 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:44.129295 | orchestrator | 2025-11-23 00:44:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:47.163721 | orchestrator | 2025-11-23 00:44:47 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:47.166557 | orchestrator | 2025-11-23 00:44:47 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:47.170984 | orchestrator | 2025-11-23 00:44:47 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:44:47.171050 | orchestrator | 2025-11-23 00:44:47 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:47.171063 | orchestrator | 2025-11-23 00:44:47 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:47.172510 | orchestrator | 2025-11-23 00:44:47 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:47.172923 | orchestrator | 2025-11-23 00:44:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:50.203571 | orchestrator | 2025-11-23 00:44:50 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:50.206567 | orchestrator | 2025-11-23 00:44:50 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:50.208750 | orchestrator | 2025-11-23 00:44:50 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:44:50.210806 | orchestrator | 2025-11-23 
00:44:50 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:50.213332 | orchestrator | 2025-11-23 00:44:50 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:50.213860 | orchestrator | 2025-11-23 00:44:50 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:50.214060 | orchestrator | 2025-11-23 00:44:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:53.282991 | orchestrator | 2025-11-23 00:44:53 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:53.283110 | orchestrator | 2025-11-23 00:44:53 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:53.283655 | orchestrator | 2025-11-23 00:44:53 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:44:53.284295 | orchestrator | 2025-11-23 00:44:53 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:53.285004 | orchestrator | 2025-11-23 00:44:53 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state STARTED 2025-11-23 00:44:53.285416 | orchestrator | 2025-11-23 00:44:53 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:53.285447 | orchestrator | 2025-11-23 00:44:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:56.332779 | orchestrator | 2025-11-23 00:44:56 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:56.334572 | orchestrator | 2025-11-23 00:44:56 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:56.335820 | orchestrator | 2025-11-23 00:44:56 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:44:56.338872 | orchestrator | 2025-11-23 00:44:56 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:56.339953 | orchestrator | 2025-11-23 
00:44:56 | INFO  | Task 37189602-1a72-478e-ab1a-e5c1461dd41a is in state SUCCESS 2025-11-23 00:44:56.340333 | orchestrator | 2025-11-23 00:44:56.340361 | orchestrator | 2025-11-23 00:44:56.340373 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 00:44:56.340384 | orchestrator | 2025-11-23 00:44:56.340395 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:44:56.340407 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.383) 0:00:00.383 ******* 2025-11-23 00:44:56.340418 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:44:56.340430 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:44:56.340440 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:44:56.340507 | orchestrator | 2025-11-23 00:44:56.340521 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:44:56.340532 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.537) 0:00:00.921 ******* 2025-11-23 00:44:56.340543 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-11-23 00:44:56.340555 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-11-23 00:44:56.340566 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-11-23 00:44:56.340577 | orchestrator | 2025-11-23 00:44:56.340588 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-11-23 00:44:56.340599 | orchestrator | 2025-11-23 00:44:56.340688 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-11-23 00:44:56.340702 | orchestrator | Sunday 23 November 2025 00:44:30 +0000 (0:00:00.794) 0:00:01.716 ******* 2025-11-23 00:44:56.340713 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:44:56.340725 | orchestrator | 
2025-11-23 00:44:56.340736 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-11-23 00:44:56.341179 | orchestrator | Sunday 23 November 2025 00:44:31 +0000 (0:00:00.983) 0:00:02.699 ******* 2025-11-23 00:44:56.341200 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-11-23 00:44:56.341250 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-11-23 00:44:56.341263 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-11-23 00:44:56.341274 | orchestrator | 2025-11-23 00:44:56.341287 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-11-23 00:44:56.341307 | orchestrator | Sunday 23 November 2025 00:44:32 +0000 (0:00:01.310) 0:00:04.010 ******* 2025-11-23 00:44:56.341326 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-11-23 00:44:56.341344 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-11-23 00:44:56.341355 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-11-23 00:44:56.341366 | orchestrator | 2025-11-23 00:44:56.341377 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-11-23 00:44:56.341408 | orchestrator | Sunday 23 November 2025 00:44:34 +0000 (0:00:02.146) 0:00:06.157 ******* 2025-11-23 00:44:56.341419 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:44:56.341430 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:44:56.341441 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:44:56.341451 | orchestrator | 2025-11-23 00:44:56.341463 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-11-23 00:44:56.341473 | orchestrator | Sunday 23 November 2025 00:44:37 +0000 (0:00:02.218) 0:00:08.376 ******* 2025-11-23 00:44:56.341484 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:44:56.341495 | orchestrator | changed: [testbed-node-1] 
2025-11-23 00:44:56.341506 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:44:56.341516 | orchestrator | 2025-11-23 00:44:56.341527 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:44:56.341538 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:44:56.341559 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:44:56.341578 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:44:56.341596 | orchestrator | 2025-11-23 00:44:56.341689 | orchestrator | 2025-11-23 00:44:56.341701 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:44:56.341712 | orchestrator | Sunday 23 November 2025 00:44:40 +0000 (0:00:03.314) 0:00:11.691 ******* 2025-11-23 00:44:56.341723 | orchestrator | =============================================================================== 2025-11-23 00:44:56.341734 | orchestrator | memcached : Restart memcached container --------------------------------- 3.32s 2025-11-23 00:44:56.341744 | orchestrator | memcached : Check memcached container ----------------------------------- 2.22s 2025-11-23 00:44:56.341896 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.15s 2025-11-23 00:44:56.341916 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.31s 2025-11-23 00:44:56.341929 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.98s 2025-11-23 00:44:56.341941 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s 2025-11-23 00:44:56.341955 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s 2025-11-23 00:44:56.341967 | orchestrator | 
2025-11-23 00:44:56.342126 | orchestrator | 2025-11-23 00:44:56.342165 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 00:44:56.342178 | orchestrator | 2025-11-23 00:44:56.342190 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:44:56.342201 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.409) 0:00:00.409 ******* 2025-11-23 00:44:56.342212 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:44:56.342223 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:44:56.342234 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:44:56.342245 | orchestrator | 2025-11-23 00:44:56.342271 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:44:56.342282 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.473) 0:00:00.883 ******* 2025-11-23 00:44:56.342293 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-11-23 00:44:56.342304 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-11-23 00:44:56.342315 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-11-23 00:44:56.342326 | orchestrator | 2025-11-23 00:44:56.342337 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-11-23 00:44:56.342349 | orchestrator | 2025-11-23 00:44:56.342360 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-11-23 00:44:56.342371 | orchestrator | Sunday 23 November 2025 00:44:30 +0000 (0:00:00.852) 0:00:01.736 ******* 2025-11-23 00:44:56.342382 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:44:56.342393 | orchestrator | 2025-11-23 00:44:56.342404 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-11-23 
00:44:56.342415 | orchestrator | Sunday 23 November 2025 00:44:31 +0000 (0:00:01.126) 0:00:02.863 ******* 2025-11-23 00:44:56.342430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342537 | orchestrator | 2025-11-23 00:44:56.342548 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-11-23 00:44:56.342560 | orchestrator | Sunday 23 November 2025 00:44:33 +0000 (0:00:01.542) 0:00:04.405 ******* 2025-11-23 00:44:56.342572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342719 | orchestrator | 2025-11-23 00:44:56.342730 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-11-23 00:44:56.342741 | orchestrator | Sunday 23 November 2025 00:44:36 +0000 (0:00:03.042) 0:00:07.448 ******* 2025-11-23 00:44:56.342752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342849 | orchestrator | 2025-11-23 00:44:56.342860 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-11-23 00:44:56.342871 | orchestrator | Sunday 23 November 2025 00:44:39 +0000 (0:00:02.888) 0:00:10.337 ******* 2025-11-23 00:44:56.342882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2025-11-23 00:44:56.342905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-23 00:44:56.342969 | orchestrator | 2025-11-23 00:44:56.342980 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-11-23 00:44:56.342991 | orchestrator | Sunday 23 November 2025 00:44:41 +0000 (0:00:01.843) 0:00:12.180 ******* 2025-11-23 00:44:56.343002 | orchestrator | 2025-11-23 00:44:56.343013 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-11-23 00:44:56.343024 | orchestrator | Sunday 23 November 2025 00:44:41 +0000 (0:00:00.065) 0:00:12.245 ******* 2025-11-23 00:44:56.343034 | orchestrator | 2025-11-23 00:44:56.343045 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-11-23 00:44:56.343055 | orchestrator | Sunday 23 November 2025 00:44:41 +0000 (0:00:00.072) 0:00:12.318 ******* 2025-11-23 00:44:56.343066 | orchestrator | 2025-11-23 00:44:56.343123 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-11-23 00:44:56.343134 | orchestrator | Sunday 23 November 2025 00:44:41 +0000 (0:00:00.077) 0:00:12.396 ******* 2025-11-23 00:44:56.343144 | orchestrator | changed: [testbed-node-0] 2025-11-23 
00:44:56.343155 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:44:56.343166 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:44:56.343177 | orchestrator | 2025-11-23 00:44:56.343188 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-11-23 00:44:56.343198 | orchestrator | Sunday 23 November 2025 00:44:46 +0000 (0:00:05.067) 0:00:17.464 ******* 2025-11-23 00:44:56.343209 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:44:56.343220 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:44:56.343230 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:44:56.343241 | orchestrator | 2025-11-23 00:44:56.343252 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:44:56.343263 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:44:56.343275 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:44:56.343285 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:44:56.343296 | orchestrator | 2025-11-23 00:44:56.343307 | orchestrator | 2025-11-23 00:44:56.343318 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:44:56.343329 | orchestrator | Sunday 23 November 2025 00:44:54 +0000 (0:00:08.151) 0:00:25.616 ******* 2025-11-23 00:44:56.343339 | orchestrator | =============================================================================== 2025-11-23 00:44:56.343350 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.15s 2025-11-23 00:44:56.343361 | orchestrator | redis : Restart redis container ----------------------------------------- 5.07s 2025-11-23 00:44:56.343379 | orchestrator | redis : Copying over default config.json files 
-------------------------- 3.04s 2025-11-23 00:44:56.343390 | orchestrator | redis : Copying over redis config files --------------------------------- 2.89s 2025-11-23 00:44:56.343400 | orchestrator | redis : Check redis containers ------------------------------------------ 1.84s 2025-11-23 00:44:56.343411 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.54s 2025-11-23 00:44:56.343446 | orchestrator | redis : include_tasks --------------------------------------------------- 1.13s 2025-11-23 00:44:56.343458 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s 2025-11-23 00:44:56.343468 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2025-11-23 00:44:56.343479 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2025-11-23 00:44:56.343490 | orchestrator | 2025-11-23 00:44:56 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:56.343501 | orchestrator | 2025-11-23 00:44:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:44:59.369113 | orchestrator | 2025-11-23 00:44:59 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:44:59.371760 | orchestrator | 2025-11-23 00:44:59 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:44:59.372846 | orchestrator | 2025-11-23 00:44:59 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:44:59.373143 | orchestrator | 2025-11-23 00:44:59 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:44:59.375204 | orchestrator | 2025-11-23 00:44:59 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:44:59.375247 | orchestrator | 2025-11-23 00:44:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:02.441187 | orchestrator | 2025-11-23 
00:45:02 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:02.441256 | orchestrator | 2025-11-23 00:45:02 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:02.441812 | orchestrator | 2025-11-23 00:45:02 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:02.442084 | orchestrator | 2025-11-23 00:45:02 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:02.442807 | orchestrator | 2025-11-23 00:45:02 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:02.442861 | orchestrator | 2025-11-23 00:45:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:05.485798 | orchestrator | 2025-11-23 00:45:05 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:05.485909 | orchestrator | 2025-11-23 00:45:05 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:05.486382 | orchestrator | 2025-11-23 00:45:05 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:05.487464 | orchestrator | 2025-11-23 00:45:05 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:05.488424 | orchestrator | 2025-11-23 00:45:05 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:05.488488 | orchestrator | 2025-11-23 00:45:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:08.512804 | orchestrator | 2025-11-23 00:45:08 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:08.513152 | orchestrator | 2025-11-23 00:45:08 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:08.513860 | orchestrator | 2025-11-23 00:45:08 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:08.514655 | orchestrator | 2025-11-23 
00:45:08 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:08.515515 | orchestrator | 2025-11-23 00:45:08 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:08.515559 | orchestrator | 2025-11-23 00:45:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:11.547736 | orchestrator | 2025-11-23 00:45:11 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:11.549067 | orchestrator | 2025-11-23 00:45:11 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:11.550195 | orchestrator | 2025-11-23 00:45:11 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:11.551192 | orchestrator | 2025-11-23 00:45:11 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:11.552137 | orchestrator | 2025-11-23 00:45:11 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:11.552323 | orchestrator | 2025-11-23 00:45:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:14.579389 | orchestrator | 2025-11-23 00:45:14 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:14.579490 | orchestrator | 2025-11-23 00:45:14 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:14.580681 | orchestrator | 2025-11-23 00:45:14 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:14.581040 | orchestrator | 2025-11-23 00:45:14 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:14.582188 | orchestrator | 2025-11-23 00:45:14 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:14.582247 | orchestrator | 2025-11-23 00:45:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:17.612166 | orchestrator | 2025-11-23 00:45:17 | INFO  | Task 
f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:17.612276 | orchestrator | 2025-11-23 00:45:17 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:17.612964 | orchestrator | 2025-11-23 00:45:17 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:17.613561 | orchestrator | 2025-11-23 00:45:17 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:17.614284 | orchestrator | 2025-11-23 00:45:17 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:17.614752 | orchestrator | 2025-11-23 00:45:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:20.642157 | orchestrator | 2025-11-23 00:45:20 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:20.644006 | orchestrator | 2025-11-23 00:45:20 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:20.645960 | orchestrator | 2025-11-23 00:45:20 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:20.647811 | orchestrator | 2025-11-23 00:45:20 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:20.649256 | orchestrator | 2025-11-23 00:45:20 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:20.649301 | orchestrator | 2025-11-23 00:45:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:23.677387 | orchestrator | 2025-11-23 00:45:23 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:23.677737 | orchestrator | 2025-11-23 00:45:23 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:23.678642 | orchestrator | 2025-11-23 00:45:23 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:23.679719 | orchestrator | 2025-11-23 00:45:23 | INFO  | Task 
ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:23.680927 | orchestrator | 2025-11-23 00:45:23 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:23.681034 | orchestrator | 2025-11-23 00:45:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:26.715914 | orchestrator | 2025-11-23 00:45:26 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:26.716221 | orchestrator | 2025-11-23 00:45:26 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:26.716779 | orchestrator | 2025-11-23 00:45:26 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:26.717522 | orchestrator | 2025-11-23 00:45:26 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:26.718349 | orchestrator | 2025-11-23 00:45:26 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:26.718395 | orchestrator | 2025-11-23 00:45:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:29.746756 | orchestrator | 2025-11-23 00:45:29 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:29.747820 | orchestrator | 2025-11-23 00:45:29 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:29.749466 | orchestrator | 2025-11-23 00:45:29 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:29.751315 | orchestrator | 2025-11-23 00:45:29 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:29.752494 | orchestrator | 2025-11-23 00:45:29 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:29.752520 | orchestrator | 2025-11-23 00:45:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:32.781994 | orchestrator | 2025-11-23 00:45:32 | INFO  | Task 
f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:32.783430 | orchestrator | 2025-11-23 00:45:32 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:32.783934 | orchestrator | 2025-11-23 00:45:32 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:32.785793 | orchestrator | 2025-11-23 00:45:32 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:32.786723 | orchestrator | 2025-11-23 00:45:32 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state STARTED 2025-11-23 00:45:32.786817 | orchestrator | 2025-11-23 00:45:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:35.817871 | orchestrator | 2025-11-23 00:45:35 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:35.817976 | orchestrator | 2025-11-23 00:45:35 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:35.819513 | orchestrator | 2025-11-23 00:45:35 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:35.819882 | orchestrator | 2025-11-23 00:45:35 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:35.821262 | orchestrator | 2025-11-23 00:45:35.821301 | orchestrator | 2025-11-23 00:45:35 | INFO  | Task 2e6439c1-1aa1-4971-81e2-90ea2102d3cf is in state SUCCESS 2025-11-23 00:45:35.823260 | orchestrator | 2025-11-23 00:45:35.823304 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 00:45:35.823317 | orchestrator | 2025-11-23 00:45:35.823329 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:45:35.823340 | orchestrator | Sunday 23 November 2025 00:44:30 +0000 (0:00:00.705) 0:00:00.705 ******* 2025-11-23 00:45:35.823351 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:45:35.823363 | orchestrator 
| ok: [testbed-node-4] 2025-11-23 00:45:35.823375 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:45:35.823393 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:35.823412 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:35.823431 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:35.823450 | orchestrator | 2025-11-23 00:45:35.823461 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:45:35.823472 | orchestrator | Sunday 23 November 2025 00:44:31 +0000 (0:00:01.243) 0:00:01.949 ******* 2025-11-23 00:45:35.823483 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-23 00:45:35.823494 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-23 00:45:35.823505 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-23 00:45:35.823515 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-23 00:45:35.823526 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-23 00:45:35.823537 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-23 00:45:35.823548 | orchestrator | 2025-11-23 00:45:35.823558 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-11-23 00:45:35.823569 | orchestrator | 2025-11-23 00:45:35.823580 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-11-23 00:45:35.823591 | orchestrator | Sunday 23 November 2025 00:44:32 +0000 (0:00:01.114) 0:00:03.064 ******* 2025-11-23 00:45:35.823669 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:45:35.823682 | orchestrator | 
2025-11-23 00:45:35.823693 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-23 00:45:35.823703 | orchestrator | Sunday 23 November 2025 00:44:33 +0000 (0:00:01.434) 0:00:04.498 ******* 2025-11-23 00:45:35.823714 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-11-23 00:45:35.823726 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-11-23 00:45:35.823736 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-11-23 00:45:35.823747 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-11-23 00:45:35.823758 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-11-23 00:45:35.823768 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-11-23 00:45:35.823779 | orchestrator | 2025-11-23 00:45:35.823790 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-23 00:45:35.823800 | orchestrator | Sunday 23 November 2025 00:44:35 +0000 (0:00:01.218) 0:00:05.717 ******* 2025-11-23 00:45:35.823811 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-11-23 00:45:35.823822 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-11-23 00:45:35.823837 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-11-23 00:45:35.823858 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-11-23 00:45:35.823877 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-11-23 00:45:35.823897 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-11-23 00:45:35.823916 | orchestrator | 2025-11-23 00:45:35.823935 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-23 00:45:35.823972 | orchestrator | Sunday 23 November 2025 00:44:37 +0000 (0:00:02.417) 0:00:08.135 ******* 2025-11-23 00:45:35.823991 | orchestrator | skipping: [testbed-node-3] => 
(item=openvswitch)  2025-11-23 00:45:35.824011 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:35.824031 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-11-23 00:45:35.824050 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:35.824068 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-11-23 00:45:35.824087 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:35.824104 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-11-23 00:45:35.824121 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:35.824140 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-11-23 00:45:35.824157 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:35.824168 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-11-23 00:45:35.824180 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:35.824189 | orchestrator | 2025-11-23 00:45:35.824199 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-11-23 00:45:35.824217 | orchestrator | Sunday 23 November 2025 00:44:38 +0000 (0:00:01.287) 0:00:09.422 ******* 2025-11-23 00:45:35.824227 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:35.824236 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:35.824246 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:35.824255 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:35.824265 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:35.824274 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:35.824284 | orchestrator | 2025-11-23 00:45:35.824293 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-11-23 00:45:35.824303 | orchestrator | Sunday 23 November 2025 00:44:39 +0000 (0:00:00.760) 0:00:10.183 ******* 2025-11-23 00:45:35.824330 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824556 | orchestrator | 2025-11-23 00:45:35.824566 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-11-23 00:45:35.824576 | orchestrator | Sunday 23 November 2025 00:44:42 +0000 (0:00:02.477) 0:00:12.661 ******* 2025-11-23 00:45:35.824587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 
'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824772 | orchestrator | 2025-11-23 00:45:35.824782 | 
orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-11-23 00:45:35.824792 | orchestrator | Sunday 23 November 2025 00:44:46 +0000 (0:00:03.998) 0:00:16.659 ******* 2025-11-23 00:45:35.824802 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:35.824812 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:35.824821 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:35.824831 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:35.824840 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:35.824850 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:35.824859 | orchestrator | 2025-11-23 00:45:35.824869 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-11-23 00:45:35.824884 | orchestrator | Sunday 23 November 2025 00:44:47 +0000 (0:00:01.596) 0:00:18.256 ******* 2025-11-23 00:45:35.824894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824915 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824944 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-23 00:45:35.824990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.825004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.825020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.825042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-23 00:45:35.825052 | orchestrator | 2025-11-23 00:45:35.825062 | orchestrator | TASK [openvswitch : Flush Handlers] 
********************************************
2025-11-23 00:45:35.825072 | orchestrator | Sunday 23 November 2025 00:44:50 +0000 (0:00:02.956) 0:00:21.213 *******
2025-11-23 00:45:35.825082 | orchestrator |
2025-11-23 00:45:35.825091 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-23 00:45:35.825101 | orchestrator | Sunday 23 November 2025 00:44:50 +0000 (0:00:00.241) 0:00:21.454 *******
2025-11-23 00:45:35.825111 | orchestrator |
2025-11-23 00:45:35.825120 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-23 00:45:35.825130 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.274) 0:00:21.729 *******
2025-11-23 00:45:35.825139 | orchestrator |
2025-11-23 00:45:35.825149 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-23 00:45:35.825158 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.131) 0:00:21.861 *******
2025-11-23 00:45:35.825168 | orchestrator |
2025-11-23 00:45:35.825177 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-23 00:45:35.825187 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.189) 0:00:22.050 *******
2025-11-23 00:45:35.825196 | orchestrator |
2025-11-23 00:45:35.825206 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-23 00:45:35.825215 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.150) 0:00:22.200 *******
2025-11-23 00:45:35.825225 | orchestrator |
2025-11-23 00:45:35.825234 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-11-23 00:45:35.825244 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.178) 0:00:22.379 *******
2025-11-23 00:45:35.825253 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:45:35.825263 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:45:35.825272 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:45:35.825282 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:45:35.825291 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:45:35.825301 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:45:35.825310 | orchestrator |
2025-11-23 00:45:35.825320 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-11-23 00:45:35.825330 | orchestrator | Sunday 23 November 2025 00:45:03 +0000 (0:00:11.289) 0:00:33.669 *******
2025-11-23 00:45:35.825339 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:45:35.825349 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:45:35.825358 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:45:35.825368 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:45:35.825377 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:45:35.825387 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:45:35.825396 | orchestrator |
2025-11-23 00:45:35.825406 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-11-23 00:45:35.825416 | orchestrator | Sunday 23 November 2025 00:45:04 +0000 (0:00:01.250) 0:00:34.919 *******
2025-11-23 00:45:35.825426 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:45:35.825435 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:45:35.825445 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:45:35.825454 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:45:35.825464 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:45:35.825478 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:45:35.825488 | orchestrator |
2025-11-23 00:45:35.825497 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-11-23 00:45:35.825511 | orchestrator | Sunday 23 November 2025 00:45:14 +0000 (0:00:10.314) 0:00:45.233 *******
2025-11-23 00:45:35.825521 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-11-23 00:45:35.825531 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-11-23 00:45:35.825540 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-11-23 00:45:35.825550 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-11-23 00:45:35.825560 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-11-23 00:45:35.825574 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-11-23 00:45:35.825584 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-11-23 00:45:35.825594 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-11-23 00:45:35.825623 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-11-23 00:45:35.825633 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-11-23 00:45:35.825642 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-11-23 00:45:35.825651 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-11-23 00:45:35.825661 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-23 00:45:35.825670 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-23 00:45:35.825680 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-23 00:45:35.825689 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-23 00:45:35.825699 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-23 00:45:35.825708 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-23 00:45:35.825718 | orchestrator |
2025-11-23 00:45:35.825728 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-11-23 00:45:35.825737 | orchestrator | Sunday 23 November 2025 00:45:21 +0000 (0:00:06.598) 0:00:51.832 *******
2025-11-23 00:45:35.825747 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-11-23 00:45:35.825757 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:45:35.825766 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-11-23 00:45:35.825776 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:45:35.825786 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-11-23 00:45:35.825795 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:45:35.825805 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-11-23 00:45:35.825814 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-11-23 00:45:35.825824 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-11-23 00:45:35.825834 | orchestrator |
2025-11-23 00:45:35.825843 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-11-23 00:45:35.825858 | orchestrator | Sunday 23 November
2025 00:45:24 +0000 (0:00:02.947) 0:00:54.780 *******
2025-11-23 00:45:35.825868 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-11-23 00:45:35.825877 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:45:35.825887 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-11-23 00:45:35.825896 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:45:35.825906 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-11-23 00:45:35.825915 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:45:35.825925 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-11-23 00:45:35.825935 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-11-23 00:45:35.825944 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-11-23 00:45:35.825954 | orchestrator |
2025-11-23 00:45:35.825964 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-11-23 00:45:35.825973 | orchestrator | Sunday 23 November 2025 00:45:27 +0000 (0:00:03.526) 0:00:58.307 *******
2025-11-23 00:45:35.825982 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:45:35.825992 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:45:35.826001 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:45:35.826011 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:45:35.826084 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:45:35.826102 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:45:35.826118 | orchestrator |
2025-11-23 00:45:35.826135 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:45:35.826160 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-23 00:45:35.826181 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-23 00:45:35.826193 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-23 00:45:35.826203 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-23 00:45:35.826212 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-23 00:45:35.826230 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-23 00:45:35.826240 | orchestrator |
2025-11-23 00:45:35.826250 | orchestrator |
2025-11-23 00:45:35.826260 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:45:35.826269 | orchestrator | Sunday 23 November 2025 00:45:34 +0000 (0:00:07.258) 0:01:05.566 *******
2025-11-23 00:45:35.826279 | orchestrator | ===============================================================================
2025-11-23 00:45:35.826289 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.57s
2025-11-23 00:45:35.826298 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.29s
2025-11-23 00:45:35.826308 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.60s
2025-11-23 00:45:35.826317 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.00s
2025-11-23 00:45:35.826327 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.53s
2025-11-23 00:45:35.826336 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.96s
2025-11-23 00:45:35.826346 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.95s
2025-11-23 00:45:35.826356 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.48s
2025-11-23 00:45:35.826372 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.42s
2025-11-23 00:45:35.826382 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.60s
2025-11-23 00:45:35.826392 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.43s
2025-11-23 00:45:35.826401 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.29s
2025-11-23 00:45:35.826411 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.25s
2025-11-23 00:45:35.826420 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.24s
2025-11-23 00:45:35.826430 | orchestrator | module-load : Load modules ---------------------------------------------- 1.22s
2025-11-23 00:45:35.826439 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.17s
2025-11-23 00:45:35.826449 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.11s
2025-11-23 00:45:35.826459 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.76s
2025-11-23 00:45:35.826468 | orchestrator | 2025-11-23 00:45:35 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:45:38.846540 | orchestrator | 2025-11-23 00:45:38 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:45:38.850156 | orchestrator | 2025-11-23 00:45:38 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:45:38.852884 | orchestrator | 2025-11-23 00:45:38 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:45:38.854839 | orchestrator | 2025-11-23 00:45:38 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED
2025-11-23 00:45:38.856002 | orchestrator | 2025-11-23 00:45:38 | INFO  | Task
ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:38.856396 | orchestrator | 2025-11-23 00:45:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:54.086187 | orchestrator | 2025-11-23 00:45:54 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:54.086290 | orchestrator | 2025-11-23 00:45:54 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:45:54.088124 | orchestrator | 2025-11-23 00:45:54 | INFO  | Task 
e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:54.088672 | orchestrator | 2025-11-23 00:45:54 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:54.089319 | orchestrator | 2025-11-23 00:45:54 | INFO  | Task 
ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state STARTED 2025-11-23 00:45:54.089355 | orchestrator | 2025-11-23 00:45:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:45:57.402005 | orchestrator | 2025-11-23 00:45:57 | INFO  | Task f5f0f9ed-ef34-4b8a-9009-d66ea267b3de is in state STARTED 2025-11-23 00:45:57.402531 | orchestrator | 2025-11-23 00:45:57 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:45:57.403388 | orchestrator | 2025-11-23 00:45:57 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:45:57.406076 | orchestrator | 2025-11-23 00:45:57 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:45:57.406882 | orchestrator | 2025-11-23 00:45:57 | INFO  | Task e38cebc4-f018-4c6f-9646-79fd0365d193 is in state STARTED 2025-11-23 00:45:57.409176 | orchestrator | 2025-11-23 00:45:57 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:45:57.415066 | orchestrator | 2025-11-23 00:45:57.415103 | orchestrator | 2025-11-23 00:45:57.415116 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-11-23 00:45:57.415129 | orchestrator | 2025-11-23 00:45:57.415141 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-11-23 00:45:57.415153 | orchestrator | Sunday 23 November 2025 00:41:46 +0000 (0:00:00.166) 0:00:00.166 ******* 2025-11-23 00:45:57.415226 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:45:57.415241 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:45:57.415252 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:45:57.415262 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.415273 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.415284 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.415294 | orchestrator | 2025-11-23 00:45:57.415305 | orchestrator | TASK [k3s_prereq : Set same 
timezone on every Server] ************************** 2025-11-23 00:45:57.415362 | orchestrator | Sunday 23 November 2025 00:41:47 +0000 (0:00:00.666) 0:00:00.833 ******* 2025-11-23 00:45:57.415375 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.415388 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.415411 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.415422 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.415433 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.415444 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.415454 | orchestrator | 2025-11-23 00:45:57.415465 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-11-23 00:45:57.415476 | orchestrator | Sunday 23 November 2025 00:41:47 +0000 (0:00:00.584) 0:00:01.418 ******* 2025-11-23 00:45:57.415487 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.415498 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.415508 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.415519 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.415530 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.415575 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.415643 | orchestrator | 2025-11-23 00:45:57.415656 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-11-23 00:45:57.415669 | orchestrator | Sunday 23 November 2025 00:41:48 +0000 (0:00:00.680) 0:00:02.098 ******* 2025-11-23 00:45:57.415682 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:45:57.415693 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:45:57.415705 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:45:57.415717 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.415729 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.415741 | orchestrator | changed: 
[testbed-node-2] 2025-11-23 00:45:57.415753 | orchestrator | 2025-11-23 00:45:57.415766 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-11-23 00:45:57.415778 | orchestrator | Sunday 23 November 2025 00:41:50 +0000 (0:00:01.841) 0:00:03.940 ******* 2025-11-23 00:45:57.415790 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:45:57.415802 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:45:57.415813 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:45:57.415826 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.415837 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.415849 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.415861 | orchestrator | 2025-11-23 00:45:57.415873 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-11-23 00:45:57.415885 | orchestrator | Sunday 23 November 2025 00:41:51 +0000 (0:00:01.326) 0:00:05.267 ******* 2025-11-23 00:45:57.415897 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:45:57.415909 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:45:57.415920 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:45:57.415933 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.415945 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.415957 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.415967 | orchestrator | 2025-11-23 00:45:57.415978 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-11-23 00:45:57.415989 | orchestrator | Sunday 23 November 2025 00:41:52 +0000 (0:00:00.891) 0:00:06.158 ******* 2025-11-23 00:45:57.416000 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.416011 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.416021 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.416032 | orchestrator | skipping: [testbed-node-0] 
2025-11-23 00:45:57.416043 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.416053 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.416064 | orchestrator | 2025-11-23 00:45:57.416075 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-11-23 00:45:57.416085 | orchestrator | Sunday 23 November 2025 00:41:52 +0000 (0:00:00.556) 0:00:06.715 ******* 2025-11-23 00:45:57.416096 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.416116 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.416127 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.416137 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.416148 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.416159 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.416169 | orchestrator | 2025-11-23 00:45:57.416180 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-11-23 00:45:57.416191 | orchestrator | Sunday 23 November 2025 00:41:53 +0000 (0:00:00.690) 0:00:07.406 ******* 2025-11-23 00:45:57.416202 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 00:45:57.416213 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 00:45:57.416223 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.416234 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 00:45:57.416245 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 00:45:57.416256 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.416267 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 00:45:57.416278 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 
00:45:57.416288 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.416299 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 00:45:57.416323 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 00:45:57.416334 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.416345 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 00:45:57.416356 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 00:45:57.416366 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.416377 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 00:45:57.416388 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 00:45:57.416399 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.416410 | orchestrator | 2025-11-23 00:45:57.416420 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-11-23 00:45:57.416431 | orchestrator | Sunday 23 November 2025 00:41:54 +0000 (0:00:00.937) 0:00:08.343 ******* 2025-11-23 00:45:57.416441 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.416458 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.416469 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.416480 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.416490 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.416501 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.416511 | orchestrator | 2025-11-23 00:45:57.416522 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-11-23 00:45:57.416534 | orchestrator | Sunday 23 November 2025 00:41:55 +0000 (0:00:01.305) 0:00:09.649 
******* 2025-11-23 00:45:57.416544 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:45:57.416555 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:45:57.416566 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:45:57.416576 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.416615 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.416627 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.416638 | orchestrator | 2025-11-23 00:45:57.416649 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-11-23 00:45:57.416660 | orchestrator | Sunday 23 November 2025 00:41:56 +0000 (0:00:00.967) 0:00:10.616 ******* 2025-11-23 00:45:57.416670 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.416681 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:45:57.416692 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:45:57.416710 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.416721 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:45:57.416732 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.416743 | orchestrator | 2025-11-23 00:45:57.416754 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-11-23 00:45:57.416765 | orchestrator | Sunday 23 November 2025 00:42:02 +0000 (0:00:05.244) 0:00:15.861 ******* 2025-11-23 00:45:57.416776 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.416786 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.416797 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.416808 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.416819 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.416830 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.416840 | orchestrator | 2025-11-23 00:45:57.416851 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-11-23 
00:45:57.416862 | orchestrator | Sunday 23 November 2025 00:42:03 +0000 (0:00:01.457) 0:00:17.318 ******* 2025-11-23 00:45:57.416873 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.416883 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.416894 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.416905 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.416916 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.416926 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.416937 | orchestrator | 2025-11-23 00:45:57.416948 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-11-23 00:45:57.416960 | orchestrator | Sunday 23 November 2025 00:42:05 +0000 (0:00:01.943) 0:00:19.262 ******* 2025-11-23 00:45:57.416971 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.416981 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.416992 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.417003 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.417014 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.417024 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.417035 | orchestrator | 2025-11-23 00:45:57.417046 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-11-23 00:45:57.417057 | orchestrator | Sunday 23 November 2025 00:42:06 +0000 (0:00:00.889) 0:00:20.151 ******* 2025-11-23 00:45:57.417067 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-11-23 00:45:57.417078 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-11-23 00:45:57.417089 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.417100 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-11-23 00:45:57.417110 | orchestrator | skipping: [testbed-node-4] => 
(item=rancher/k3s)  2025-11-23 00:45:57.417121 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.417132 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-11-23 00:45:57.417143 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-11-23 00:45:57.417153 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.417164 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-11-23 00:45:57.417175 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-11-23 00:45:57.417186 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-11-23 00:45:57.417197 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-11-23 00:45:57.417207 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.417218 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.417229 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-11-23 00:45:57.417240 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-11-23 00:45:57.417250 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.417261 | orchestrator | 2025-11-23 00:45:57.417272 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-11-23 00:45:57.417296 | orchestrator | Sunday 23 November 2025 00:42:07 +0000 (0:00:01.198) 0:00:21.350 ******* 2025-11-23 00:45:57.417307 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.417318 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.417329 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.417340 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.417350 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.417361 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.417372 | orchestrator | 2025-11-23 00:45:57.417383 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no 
registries configured] *** 2025-11-23 00:45:57.417394 | orchestrator | Sunday 23 November 2025 00:42:08 +0000 (0:00:01.327) 0:00:22.678 ******* 2025-11-23 00:45:57.417405 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.417415 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.417426 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.417436 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.417447 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.417458 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.417468 | orchestrator | 2025-11-23 00:45:57.417484 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-11-23 00:45:57.417495 | orchestrator | 2025-11-23 00:45:57.417506 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-11-23 00:45:57.417517 | orchestrator | Sunday 23 November 2025 00:42:10 +0000 (0:00:01.662) 0:00:24.340 ******* 2025-11-23 00:45:57.417528 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.417538 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.417549 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.417560 | orchestrator | 2025-11-23 00:45:57.417571 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-11-23 00:45:57.417582 | orchestrator | Sunday 23 November 2025 00:42:12 +0000 (0:00:01.671) 0:00:26.011 ******* 2025-11-23 00:45:57.417639 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.417651 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.417661 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.417672 | orchestrator | 2025-11-23 00:45:57.417683 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-11-23 00:45:57.417694 | orchestrator | Sunday 23 November 2025 00:42:13 +0000 (0:00:01.035) 0:00:27.047 ******* 
2025-11-23 00:45:57.417705 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.417716 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.417727 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.417737 | orchestrator | 2025-11-23 00:45:57.417748 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-11-23 00:45:57.417759 | orchestrator | Sunday 23 November 2025 00:42:14 +0000 (0:00:00.850) 0:00:27.897 ******* 2025-11-23 00:45:57.417770 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.417781 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.417791 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.417802 | orchestrator | 2025-11-23 00:45:57.417813 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-11-23 00:45:57.417824 | orchestrator | Sunday 23 November 2025 00:42:14 +0000 (0:00:00.829) 0:00:28.726 ******* 2025-11-23 00:45:57.417834 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.417845 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.417856 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.417867 | orchestrator | 2025-11-23 00:45:57.417878 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-11-23 00:45:57.417889 | orchestrator | Sunday 23 November 2025 00:42:15 +0000 (0:00:00.384) 0:00:29.111 ******* 2025-11-23 00:45:57.417918 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.417929 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.417940 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.417951 | orchestrator | 2025-11-23 00:45:57.417962 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-11-23 00:45:57.417979 | orchestrator | Sunday 23 November 2025 00:42:16 +0000 (0:00:01.396) 0:00:30.508 ******* 2025-11-23 00:45:57.417989 | orchestrator 
| changed: [testbed-node-2] 2025-11-23 00:45:57.417998 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.418008 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.418057 | orchestrator | 2025-11-23 00:45:57.418108 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-11-23 00:45:57.418120 | orchestrator | Sunday 23 November 2025 00:42:18 +0000 (0:00:01.719) 0:00:32.227 ******* 2025-11-23 00:45:57.418129 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:45:57.418139 | orchestrator | 2025-11-23 00:45:57.418148 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-11-23 00:45:57.418158 | orchestrator | Sunday 23 November 2025 00:42:18 +0000 (0:00:00.389) 0:00:32.617 ******* 2025-11-23 00:45:57.418168 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.418177 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.418187 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.418196 | orchestrator | 2025-11-23 00:45:57.418206 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-11-23 00:45:57.418215 | orchestrator | Sunday 23 November 2025 00:42:20 +0000 (0:00:02.126) 0:00:34.744 ******* 2025-11-23 00:45:57.418225 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.418234 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.418244 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.418253 | orchestrator | 2025-11-23 00:45:57.418263 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-11-23 00:45:57.418272 | orchestrator | Sunday 23 November 2025 00:42:21 +0000 (0:00:00.666) 0:00:35.411 ******* 2025-11-23 00:45:57.418282 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.418291 | orchestrator | skipping: 
[testbed-node-2] 2025-11-23 00:45:57.418301 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.418310 | orchestrator | 2025-11-23 00:45:57.418319 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-11-23 00:45:57.418329 | orchestrator | Sunday 23 November 2025 00:42:22 +0000 (0:00:01.082) 0:00:36.493 ******* 2025-11-23 00:45:57.418338 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.418348 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.418357 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.418367 | orchestrator | 2025-11-23 00:45:57.418376 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-11-23 00:45:57.418394 | orchestrator | Sunday 23 November 2025 00:42:24 +0000 (0:00:02.062) 0:00:38.555 ******* 2025-11-23 00:45:57.418404 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.418414 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.418424 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.418433 | orchestrator | 2025-11-23 00:45:57.418443 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-11-23 00:45:57.418452 | orchestrator | Sunday 23 November 2025 00:42:25 +0000 (0:00:00.598) 0:00:39.154 ******* 2025-11-23 00:45:57.418462 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.418472 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.418481 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.418491 | orchestrator | 2025-11-23 00:45:57.418500 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-11-23 00:45:57.418510 | orchestrator | Sunday 23 November 2025 00:42:25 +0000 (0:00:00.403) 0:00:39.557 ******* 2025-11-23 00:45:57.418520 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.418529 | orchestrator | changed: 
[testbed-node-1] 2025-11-23 00:45:57.418539 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.418549 | orchestrator | 2025-11-23 00:45:57.419161 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2025-11-23 00:45:57.419197 | orchestrator | Sunday 23 November 2025 00:42:27 +0000 (0:00:01.804) 0:00:41.361 ******* 2025-11-23 00:45:57.419229 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.419247 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.419265 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.419283 | orchestrator | 2025-11-23 00:45:57.419302 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2025-11-23 00:45:57.419320 | orchestrator | Sunday 23 November 2025 00:42:30 +0000 (0:00:03.044) 0:00:44.406 ******* 2025-11-23 00:45:57.419337 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.419356 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.419373 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.419391 | orchestrator | 2025-11-23 00:45:57.419409 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-11-23 00:45:57.419428 | orchestrator | Sunday 23 November 2025 00:42:31 +0000 (0:00:00.786) 0:00:45.192 ******* 2025-11-23 00:45:57.419446 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-23 00:45:57.419465 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-23 00:45:57.419483 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2025-11-23 00:45:57.419502 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-23 00:45:57.419520 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-23 00:45:57.419543 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-23 00:45:57.419562 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-23 00:45:57.419577 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-23 00:45:57.419614 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-23 00:45:57.419630 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-23 00:45:57.419646 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-23 00:45:57.419662 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-23 00:45:57.419676 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-11-23 00:45:57.419691 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-11-23 00:45:57.419706 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-11-23 00:45:57.419722 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.419738 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.419753 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.419770 | orchestrator | 2025-11-23 00:45:57.419786 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-11-23 00:45:57.419804 | orchestrator | Sunday 23 November 2025 00:43:25 +0000 (0:00:54.181) 0:01:39.374 ******* 2025-11-23 00:45:57.419821 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.419837 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.419854 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.419884 | orchestrator | 2025-11-23 00:45:57.419901 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-11-23 00:45:57.419931 | orchestrator | Sunday 23 November 2025 00:43:25 +0000 (0:00:00.316) 0:01:39.690 ******* 2025-11-23 00:45:57.419948 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.420031 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.420046 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.420055 | orchestrator | 2025-11-23 00:45:57.420065 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-11-23 00:45:57.420075 | orchestrator | Sunday 23 November 2025 00:43:26 +0000 (0:00:00.935) 0:01:40.626 ******* 2025-11-23 00:45:57.420084 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.420094 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.420103 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.420112 | orchestrator | 2025-11-23 00:45:57.420122 | orchestrator | TASK [k3s_server : Enable and check K3s service] 
******************************* 2025-11-23 00:45:57.420131 | orchestrator | Sunday 23 November 2025 00:43:28 +0000 (0:00:01.203) 0:01:41.829 ******* 2025-11-23 00:45:57.420141 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.420150 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.420160 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.420169 | orchestrator | 2025-11-23 00:45:57.420178 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-11-23 00:45:57.420188 | orchestrator | Sunday 23 November 2025 00:43:53 +0000 (0:00:25.742) 0:02:07.571 ******* 2025-11-23 00:45:57.420197 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.420207 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.420216 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.420225 | orchestrator | 2025-11-23 00:45:57.420235 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-11-23 00:45:57.420244 | orchestrator | Sunday 23 November 2025 00:43:54 +0000 (0:00:00.743) 0:02:08.315 ******* 2025-11-23 00:45:57.420254 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.420263 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.420273 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.420282 | orchestrator | 2025-11-23 00:45:57.420291 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-11-23 00:45:57.420301 | orchestrator | Sunday 23 November 2025 00:43:55 +0000 (0:00:00.636) 0:02:08.952 ******* 2025-11-23 00:45:57.420310 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.420320 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.420329 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.420338 | orchestrator | 2025-11-23 00:45:57.420348 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-11-23 
00:45:57.420357 | orchestrator | Sunday 23 November 2025 00:43:55 +0000 (0:00:00.674) 0:02:09.626 ******* 2025-11-23 00:45:57.420367 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.420376 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.420386 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.420395 | orchestrator | 2025-11-23 00:45:57.420404 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-11-23 00:45:57.420414 | orchestrator | Sunday 23 November 2025 00:43:56 +0000 (0:00:00.808) 0:02:10.435 ******* 2025-11-23 00:45:57.420423 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.420433 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.420442 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.420451 | orchestrator | 2025-11-23 00:45:57.420461 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-11-23 00:45:57.420470 | orchestrator | Sunday 23 November 2025 00:43:56 +0000 (0:00:00.261) 0:02:10.697 ******* 2025-11-23 00:45:57.420480 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.420489 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.420499 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.420508 | orchestrator | 2025-11-23 00:45:57.420524 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-11-23 00:45:57.420541 | orchestrator | Sunday 23 November 2025 00:43:57 +0000 (0:00:00.669) 0:02:11.367 ******* 2025-11-23 00:45:57.420551 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.420561 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.420570 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.420579 | orchestrator | 2025-11-23 00:45:57.420640 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-11-23 00:45:57.420651 | orchestrator | Sunday 23 
November 2025 00:43:58 +0000 (0:00:00.599) 0:02:11.966 ******* 2025-11-23 00:45:57.420661 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.420670 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.420680 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.420689 | orchestrator | 2025-11-23 00:45:57.420699 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-11-23 00:45:57.420708 | orchestrator | Sunday 23 November 2025 00:43:59 +0000 (0:00:00.967) 0:02:12.934 ******* 2025-11-23 00:45:57.420717 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:45:57.420727 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:45:57.420736 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:45:57.420745 | orchestrator | 2025-11-23 00:45:57.420770 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-11-23 00:45:57.420780 | orchestrator | Sunday 23 November 2025 00:43:59 +0000 (0:00:00.786) 0:02:13.720 ******* 2025-11-23 00:45:57.420789 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.420799 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.420808 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.420818 | orchestrator | 2025-11-23 00:45:57.420827 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-11-23 00:45:57.420837 | orchestrator | Sunday 23 November 2025 00:44:00 +0000 (0:00:00.254) 0:02:13.975 ******* 2025-11-23 00:45:57.420846 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.420855 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.420865 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.420874 | orchestrator | 2025-11-23 00:45:57.420883 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-11-23 00:45:57.420893 | orchestrator | Sunday 23 November 
2025 00:44:00 +0000 (0:00:00.255) 0:02:14.230 ******* 2025-11-23 00:45:57.420902 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.420912 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.420921 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.420930 | orchestrator | 2025-11-23 00:45:57.420940 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-11-23 00:45:57.420949 | orchestrator | Sunday 23 November 2025 00:44:01 +0000 (0:00:00.780) 0:02:15.011 ******* 2025-11-23 00:45:57.420959 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.420976 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.420986 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.420996 | orchestrator | 2025-11-23 00:45:57.421006 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-11-23 00:45:57.421016 | orchestrator | Sunday 23 November 2025 00:44:01 +0000 (0:00:00.639) 0:02:15.650 ******* 2025-11-23 00:45:57.421025 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-23 00:45:57.421035 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-23 00:45:57.421044 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-23 00:45:57.421054 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-23 00:45:57.421063 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-23 00:45:57.421073 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-23 00:45:57.421089 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-23 00:45:57.421097 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-23 00:45:57.421105 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-23 00:45:57.421113 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-23 00:45:57.421120 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-11-23 00:45:57.421128 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-23 00:45:57.421136 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-23 00:45:57.421143 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-11-23 00:45:57.421151 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-23 00:45:57.421159 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-23 00:45:57.421166 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-23 00:45:57.421174 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-23 00:45:57.421182 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-23 00:45:57.421190 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-23 00:45:57.421197 | orchestrator | 2025-11-23 00:45:57.421209 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-11-23 00:45:57.421217 | orchestrator | 2025-11-23 00:45:57.421225 | 
orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-11-23 00:45:57.421232 | orchestrator | Sunday 23 November 2025 00:44:05 +0000 (0:00:03.207) 0:02:18.858 ******* 2025-11-23 00:45:57.421240 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:45:57.421248 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:45:57.421255 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:45:57.421263 | orchestrator | 2025-11-23 00:45:57.421271 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-11-23 00:45:57.421278 | orchestrator | Sunday 23 November 2025 00:44:05 +0000 (0:00:00.385) 0:02:19.243 ******* 2025-11-23 00:45:57.421286 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:45:57.421294 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:45:57.421301 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:45:57.421309 | orchestrator | 2025-11-23 00:45:57.421317 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-11-23 00:45:57.421325 | orchestrator | Sunday 23 November 2025 00:44:06 +0000 (0:00:00.608) 0:02:19.852 ******* 2025-11-23 00:45:57.421332 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:45:57.421340 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:45:57.421347 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:45:57.421355 | orchestrator | 2025-11-23 00:45:57.421363 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-11-23 00:45:57.421371 | orchestrator | Sunday 23 November 2025 00:44:06 +0000 (0:00:00.265) 0:02:20.117 ******* 2025-11-23 00:45:57.421379 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:45:57.421386 | orchestrator | 2025-11-23 00:45:57.421394 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-11-23 
00:45:57.421402 | orchestrator | Sunday 23 November 2025 00:44:06 +0000 (0:00:00.507) 0:02:20.624 ******* 2025-11-23 00:45:57.421410 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.421417 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.421425 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.421433 | orchestrator | 2025-11-23 00:45:57.421445 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-11-23 00:45:57.421453 | orchestrator | Sunday 23 November 2025 00:44:07 +0000 (0:00:00.291) 0:02:20.916 ******* 2025-11-23 00:45:57.421460 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.421468 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.421476 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.421483 | orchestrator | 2025-11-23 00:45:57.421491 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-11-23 00:45:57.421503 | orchestrator | Sunday 23 November 2025 00:44:07 +0000 (0:00:00.255) 0:02:21.172 ******* 2025-11-23 00:45:57.421511 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:45:57.421519 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:45:57.421527 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:45:57.421534 | orchestrator | 2025-11-23 00:45:57.421542 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-11-23 00:45:57.421550 | orchestrator | Sunday 23 November 2025 00:44:07 +0000 (0:00:00.269) 0:02:21.441 ******* 2025-11-23 00:45:57.421558 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:45:57.421565 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:45:57.421573 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:45:57.421581 | orchestrator | 2025-11-23 00:45:57.421600 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-11-23 
00:45:57.421608 | orchestrator | Sunday 23 November 2025 00:44:08 +0000 (0:00:00.711) 0:02:22.153 ******* 2025-11-23 00:45:57.421616 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:45:57.421624 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:45:57.421632 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:45:57.421640 | orchestrator | 2025-11-23 00:45:57.421647 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-11-23 00:45:57.421655 | orchestrator | Sunday 23 November 2025 00:44:09 +0000 (0:00:01.047) 0:02:23.200 ******* 2025-11-23 00:45:57.421663 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:45:57.421671 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:45:57.421679 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:45:57.421686 | orchestrator | 2025-11-23 00:45:57.421694 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-11-23 00:45:57.421702 | orchestrator | Sunday 23 November 2025 00:44:10 +0000 (0:00:01.146) 0:02:24.347 ******* 2025-11-23 00:45:57.421710 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:45:57.421718 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:45:57.421726 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:45:57.421733 | orchestrator | 2025-11-23 00:45:57.421741 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-11-23 00:45:57.421749 | orchestrator | 2025-11-23 00:45:57.421757 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-11-23 00:45:57.421765 | orchestrator | Sunday 23 November 2025 00:44:20 +0000 (0:00:09.797) 0:02:34.145 ******* 2025-11-23 00:45:57.421773 | orchestrator | ok: [testbed-manager] 2025-11-23 00:45:57.421780 | orchestrator | 2025-11-23 00:45:57.421788 | orchestrator | TASK [Create .kube directory] ************************************************** 
2025-11-23 00:45:57.421796 | orchestrator | Sunday 23 November 2025 00:44:21 +0000 (0:00:00.731) 0:02:34.876 ******* 2025-11-23 00:45:57.421804 | orchestrator | changed: [testbed-manager] 2025-11-23 00:45:57.421812 | orchestrator | 2025-11-23 00:45:57.421819 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-23 00:45:57.421827 | orchestrator | Sunday 23 November 2025 00:44:21 +0000 (0:00:00.401) 0:02:35.277 ******* 2025-11-23 00:45:57.421835 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-23 00:45:57.421843 | orchestrator | 2025-11-23 00:45:57.421851 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-23 00:45:57.421859 | orchestrator | Sunday 23 November 2025 00:44:22 +0000 (0:00:00.528) 0:02:35.806 ******* 2025-11-23 00:45:57.421866 | orchestrator | changed: [testbed-manager] 2025-11-23 00:45:57.421880 | orchestrator | 2025-11-23 00:45:57.421888 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-11-23 00:45:57.421902 | orchestrator | Sunday 23 November 2025 00:44:22 +0000 (0:00:00.787) 0:02:36.594 ******* 2025-11-23 00:45:57.421910 | orchestrator | changed: [testbed-manager] 2025-11-23 00:45:57.421918 | orchestrator | 2025-11-23 00:45:57.421926 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-11-23 00:45:57.421934 | orchestrator | Sunday 23 November 2025 00:44:23 +0000 (0:00:00.515) 0:02:37.109 ******* 2025-11-23 00:45:57.421942 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-23 00:45:57.421949 | orchestrator | 2025-11-23 00:45:57.421957 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-11-23 00:45:57.421965 | orchestrator | Sunday 23 November 2025 00:44:24 +0000 (0:00:01.579) 0:02:38.688 ******* 2025-11-23 00:45:57.421973 | orchestrator | changed: 
[testbed-manager -> localhost] 2025-11-23 00:45:57.421981 | orchestrator | 2025-11-23 00:45:57.421988 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-11-23 00:45:57.422006 | orchestrator | Sunday 23 November 2025 00:44:25 +0000 (0:00:00.819) 0:02:39.508 ******* 2025-11-23 00:45:57.422049 | orchestrator | changed: [testbed-manager] 2025-11-23 00:45:57.422060 | orchestrator | 2025-11-23 00:45:57.422068 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-11-23 00:45:57.422076 | orchestrator | Sunday 23 November 2025 00:44:26 +0000 (0:00:00.395) 0:02:39.904 ******* 2025-11-23 00:45:57.422083 | orchestrator | changed: [testbed-manager] 2025-11-23 00:45:57.422091 | orchestrator | 2025-11-23 00:45:57.422099 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-11-23 00:45:57.422107 | orchestrator | 2025-11-23 00:45:57.422114 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-11-23 00:45:57.422122 | orchestrator | Sunday 23 November 2025 00:44:26 +0000 (0:00:00.502) 0:02:40.407 ******* 2025-11-23 00:45:57.422130 | orchestrator | ok: [testbed-manager] 2025-11-23 00:45:57.422137 | orchestrator | 2025-11-23 00:45:57.422145 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-11-23 00:45:57.422153 | orchestrator | Sunday 23 November 2025 00:44:26 +0000 (0:00:00.116) 0:02:40.523 ******* 2025-11-23 00:45:57.422160 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-11-23 00:45:57.422168 | orchestrator | 2025-11-23 00:45:57.422176 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-11-23 00:45:57.422183 | orchestrator | Sunday 23 November 2025 00:44:26 +0000 (0:00:00.168) 0:02:40.692 ******* 2025-11-23 00:45:57.422191 | 
orchestrator | ok: [testbed-manager] 2025-11-23 00:45:57.422199 | orchestrator | 2025-11-23 00:45:57.422207 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-11-23 00:45:57.422214 | orchestrator | Sunday 23 November 2025 00:44:27 +0000 (0:00:00.635) 0:02:41.327 ******* 2025-11-23 00:45:57.422227 | orchestrator | ok: [testbed-manager] 2025-11-23 00:45:57.422235 | orchestrator | 2025-11-23 00:45:57.422243 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-11-23 00:45:57.422251 | orchestrator | Sunday 23 November 2025 00:44:28 +0000 (0:00:01.275) 0:02:42.603 ******* 2025-11-23 00:45:57.422259 | orchestrator | changed: [testbed-manager] 2025-11-23 00:45:57.422266 | orchestrator | 2025-11-23 00:45:57.422274 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-11-23 00:45:57.422282 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.651) 0:02:43.255 ******* 2025-11-23 00:45:57.422289 | orchestrator | ok: [testbed-manager] 2025-11-23 00:45:57.422297 | orchestrator | 2025-11-23 00:45:57.422305 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-11-23 00:45:57.422313 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.430) 0:02:43.685 ******* 2025-11-23 00:45:57.422320 | orchestrator | changed: [testbed-manager] 2025-11-23 00:45:57.422328 | orchestrator | 2025-11-23 00:45:57.422336 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-11-23 00:45:57.422350 | orchestrator | Sunday 23 November 2025 00:44:36 +0000 (0:00:06.889) 0:02:50.574 ******* 2025-11-23 00:45:57.422357 | orchestrator | changed: [testbed-manager] 2025-11-23 00:45:57.422365 | orchestrator | 2025-11-23 00:45:57.422373 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-11-23 
00:45:57.422381 | orchestrator | Sunday 23 November 2025 00:44:50 +0000 (0:00:13.490) 0:03:04.064 ******* 2025-11-23 00:45:57.422389 | orchestrator | ok: [testbed-manager] 2025-11-23 00:45:57.422397 | orchestrator | 2025-11-23 00:45:57.422404 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-11-23 00:45:57.422412 | orchestrator | 2025-11-23 00:45:57.422420 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-11-23 00:45:57.422428 | orchestrator | Sunday 23 November 2025 00:44:50 +0000 (0:00:00.530) 0:03:04.595 ******* 2025-11-23 00:45:57.422436 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:45:57.422444 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:45:57.422451 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:45:57.422459 | orchestrator | 2025-11-23 00:45:57.422467 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-11-23 00:45:57.422475 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.317) 0:03:04.912 ******* 2025-11-23 00:45:57.422483 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.422491 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:45:57.422499 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:45:57.422506 | orchestrator | 2025-11-23 00:45:57.422514 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-11-23 00:45:57.422522 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.364) 0:03:05.277 ******* 2025-11-23 00:45:57.422530 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-11-23 00:45:57.422538 | orchestrator | 2025-11-23 00:45:57.422545 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-11-23 00:45:57.422553 | orchestrator | Sunday 23 
November 2025 00:44:52 +0000 (0:00:00.997) 0:03:06.274 ******* 2025-11-23 00:45:57.422561 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 00:45:57.422569 | orchestrator | 2025-11-23 00:45:57.422577 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-11-23 00:45:57.422599 | orchestrator | Sunday 23 November 2025 00:44:53 +0000 (0:00:00.793) 0:03:07.068 ******* 2025-11-23 00:45:57.422608 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.422616 | orchestrator | 2025-11-23 00:45:57.422623 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-11-23 00:45:57.422631 | orchestrator | Sunday 23 November 2025 00:44:53 +0000 (0:00:00.326) 0:03:07.394 ******* 2025-11-23 00:45:57.422639 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 00:45:57.422646 | orchestrator | 2025-11-23 00:45:57.422654 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-11-23 00:45:57.422662 | orchestrator | Sunday 23 November 2025 00:44:54 +0000 (0:00:01.229) 0:03:08.623 ******* 2025-11-23 00:45:57.422669 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.422677 | orchestrator | 2025-11-23 00:45:57.422685 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-11-23 00:45:57.422692 | orchestrator | Sunday 23 November 2025 00:44:55 +0000 (0:00:00.304) 0:03:08.927 ******* 2025-11-23 00:45:57.422700 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.422708 | orchestrator | 2025-11-23 00:45:57.422715 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-11-23 00:45:57.422723 | orchestrator | Sunday 23 November 2025 00:44:55 +0000 (0:00:00.269) 0:03:09.197 ******* 2025-11-23 00:45:57.422731 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.422739 | orchestrator | 2025-11-23 
00:45:57.422746 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-11-23 00:45:57.422754 | orchestrator | Sunday 23 November 2025 00:44:55 +0000 (0:00:00.178) 0:03:09.376 ******* 2025-11-23 00:45:57.422766 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:45:57.422774 | orchestrator | 2025-11-23 00:45:57.422782 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-11-23 00:45:57.422789 | orchestrator | Sunday 23 November 2025 00:44:55 +0000 (0:00:00.082) 0:03:09.458 ******* 2025-11-23 00:45:57.422797 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-11-23 00:45:57.422805 | orchestrator | 2025-11-23 00:45:57.422812 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-11-23 00:45:57.422820 | orchestrator | Sunday 23 November 2025 00:45:00 +0000 (0:00:04.529) 0:03:13.987 ******* 2025-11-23 00:45:57.422828 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-11-23 00:45:57.422835 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2025-11-23 00:45:57.422843 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-11-23 00:45:57.422851 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-11-23 00:45:57.422863 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-11-23 00:45:57.422873 | orchestrator | 2025-11-23 00:45:57 | INFO  | Task ddd9a6d1-b4f1-4c88-81f4-4ea0ff50fd12 is in state SUCCESS
2025-11-23 00:45:57.422880 | orchestrator | 2025-11-23 00:45:57 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:45:57.422888 | orchestrator |
2025-11-23 00:45:57.422896 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-11-23 00:45:57.422903 | orchestrator | Sunday 23 November 2025 00:45:52 +0000 (0:00:52.591) 0:04:06.579 *******
2025-11-23 00:45:57.422911 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-23 00:45:57.422919 | orchestrator |
2025-11-23 00:45:57.422926 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-11-23 00:45:57.422934 | orchestrator | Sunday 23 November 2025 00:45:54 +0000 (0:00:01.346) 0:04:07.925 *******
2025-11-23 00:45:57.422942 | orchestrator | fatal: [testbed-node-0 -> localhost]: FAILED!
=> {"changed": false, "checksum": "e067333911ec303b1abbababa17374a0629c5a29", "msg": "Destination directory /tmp/k3s does not exist"}
2025-11-23 00:45:57.422950 | orchestrator |
2025-11-23 00:45:57.422958 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:45:57.422966 | orchestrator | testbed-manager : ok=18  changed=10  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:45:57.422974 | orchestrator | testbed-node-0 : ok=43  changed=20  unreachable=0 failed=1  skipped=24  rescued=0 ignored=0
2025-11-23 00:45:57.422982 | orchestrator | testbed-node-1 : ok=35  changed=16  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-11-23 00:45:57.422990 | orchestrator | testbed-node-2 : ok=35  changed=16  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-11-23 00:45:57.422998 | orchestrator | testbed-node-3 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-11-23 00:45:57.423006 | orchestrator | testbed-node-4 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-11-23 00:45:57.423013 | orchestrator | testbed-node-5 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-11-23 00:45:57.423021 | orchestrator |
2025-11-23 00:45:57.423029 | orchestrator |
2025-11-23 00:45:57.423037 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:45:57.423044 | orchestrator | Sunday 23 November 2025 00:45:55 +0000 (0:00:01.481) 0:04:09.407 *******
2025-11-23 00:45:57.423063 | orchestrator | ===============================================================================
2025-11-23 00:45:57.423071 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.18s
2025-11-23 00:45:57.423078 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 52.59s
2025-11-23 00:45:57.423086 |
orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.74s
2025-11-23 00:45:57.423094 | orchestrator | kubectl : Install required packages ------------------------------------ 13.49s
2025-11-23 00:45:57.423101 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.80s
2025-11-23 00:45:57.423109 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.89s
2025-11-23 00:45:57.423117 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.24s
2025-11-23 00:45:57.423125 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.53s
2025-11-23 00:45:57.423132 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.21s
2025-11-23 00:45:57.423140 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.04s
2025-11-23 00:45:57.423148 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.13s
2025-11-23 00:45:57.423156 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.06s
2025-11-23 00:45:57.423163 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.94s
2025-11-23 00:45:57.423171 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.84s
2025-11-23 00:45:57.423179 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.80s
2025-11-23 00:45:57.423186 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.72s
2025-11-23 00:45:57.423194 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.67s
2025-11-23 00:45:57.423202 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.66s
2025-11-23 00:45:57.423209 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.58s
2025-11-23 00:45:57.423217 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.48s
2025-11-23 00:46:00.459683 | orchestrator | 2025-11-23 00:46:00 | INFO  | Task f5f0f9ed-ef34-4b8a-9009-d66ea267b3de is in state STARTED
2025-11-23 00:46:00.459790 | orchestrator | 2025-11-23 00:46:00 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:46:00.459816 | orchestrator | 2025-11-23 00:46:00 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:46:00.459831 | orchestrator | 2025-11-23 00:46:00 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:46:00.459842 | orchestrator | 2025-11-23 00:46:00 | INFO  | Task e38cebc4-f018-4c6f-9646-79fd0365d193 is in state STARTED
2025-11-23 00:46:00.459853 | orchestrator | 2025-11-23 00:46:00 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED
2025-11-23 00:46:00.459865 | orchestrator | 2025-11-23 00:46:00 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:46:03.479258 | orchestrator | 2025-11-23 00:46:03 | INFO  | Task f5f0f9ed-ef34-4b8a-9009-d66ea267b3de is in state SUCCESS
2025-11-23 00:46:03.479439 | orchestrator | 2025-11-23 00:46:03 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:46:03.479454 | orchestrator | 2025-11-23 00:46:03 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:46:03.479472 | orchestrator | 2025-11-23 00:46:03 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:46:03.482206 | orchestrator | 2025-11-23 00:46:03 | INFO  | Task e38cebc4-f018-4c6f-9646-79fd0365d193 is in state STARTED
2025-11-23 00:46:03.482266 | orchestrator | 2025-11-23 00:46:03 | INFO
 | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:03.482284 | orchestrator | 2025-11-23 00:46:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:06.512439 | orchestrator | 2025-11-23 00:46:06 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:06.513049 | orchestrator | 2025-11-23 00:46:06 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:06.514393 | orchestrator | 2025-11-23 00:46:06 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:06.514952 | orchestrator | 2025-11-23 00:46:06 | INFO  | Task e38cebc4-f018-4c6f-9646-79fd0365d193 is in state SUCCESS 2025-11-23 00:46:06.516079 | orchestrator | 2025-11-23 00:46:06 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:06.516130 | orchestrator | 2025-11-23 00:46:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:09.557018 | orchestrator | 2025-11-23 00:46:09 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:09.558153 | orchestrator | 2025-11-23 00:46:09 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:09.560230 | orchestrator | 2025-11-23 00:46:09 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:09.562426 | orchestrator | 2025-11-23 00:46:09 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:09.562454 | orchestrator | 2025-11-23 00:46:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:12.589481 | orchestrator | 2025-11-23 00:46:12 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:12.589890 | orchestrator | 2025-11-23 00:46:12 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:12.590469 | orchestrator | 2025-11-23 00:46:12 | INFO  | Task 
e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:12.591011 | orchestrator | 2025-11-23 00:46:12 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:12.591043 | orchestrator | 2025-11-23 00:46:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:15.619978 | orchestrator | 2025-11-23 00:46:15 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:15.622071 | orchestrator | 2025-11-23 00:46:15 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:15.623493 | orchestrator | 2025-11-23 00:46:15 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:15.627228 | orchestrator | 2025-11-23 00:46:15 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:15.627258 | orchestrator | 2025-11-23 00:46:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:18.662137 | orchestrator | 2025-11-23 00:46:18 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:18.664868 | orchestrator | 2025-11-23 00:46:18 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:18.666962 | orchestrator | 2025-11-23 00:46:18 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:18.669703 | orchestrator | 2025-11-23 00:46:18 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:18.670416 | orchestrator | 2025-11-23 00:46:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:21.710304 | orchestrator | 2025-11-23 00:46:21 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:21.710832 | orchestrator | 2025-11-23 00:46:21 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:21.712221 | orchestrator | 2025-11-23 00:46:21 | INFO  | Task 
e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:21.713191 | orchestrator | 2025-11-23 00:46:21 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:21.713639 | orchestrator | 2025-11-23 00:46:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:24.757019 | orchestrator | 2025-11-23 00:46:24 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:24.759878 | orchestrator | 2025-11-23 00:46:24 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:24.761267 | orchestrator | 2025-11-23 00:46:24 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:24.763782 | orchestrator | 2025-11-23 00:46:24 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:24.763824 | orchestrator | 2025-11-23 00:46:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:27.795814 | orchestrator | 2025-11-23 00:46:27 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:27.797714 | orchestrator | 2025-11-23 00:46:27 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:27.800188 | orchestrator | 2025-11-23 00:46:27 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:27.802289 | orchestrator | 2025-11-23 00:46:27 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:27.802362 | orchestrator | 2025-11-23 00:46:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:30.837706 | orchestrator | 2025-11-23 00:46:30 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:30.837988 | orchestrator | 2025-11-23 00:46:30 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:30.838389 | orchestrator | 2025-11-23 00:46:30 | INFO  | Task 
e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:30.839248 | orchestrator | 2025-11-23 00:46:30 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:30.839289 | orchestrator | 2025-11-23 00:46:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:33.894495 | orchestrator | 2025-11-23 00:46:33 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:33.894623 | orchestrator | 2025-11-23 00:46:33 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:33.895123 | orchestrator | 2025-11-23 00:46:33 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:33.896982 | orchestrator | 2025-11-23 00:46:33 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:33.897004 | orchestrator | 2025-11-23 00:46:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:36.988560 | orchestrator | 2025-11-23 00:46:36 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:36.989952 | orchestrator | 2025-11-23 00:46:36 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:36.991427 | orchestrator | 2025-11-23 00:46:36 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:36.992837 | orchestrator | 2025-11-23 00:46:36 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:36.992870 | orchestrator | 2025-11-23 00:46:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:40.024539 | orchestrator | 2025-11-23 00:46:40 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:40.024697 | orchestrator | 2025-11-23 00:46:40 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:40.025286 | orchestrator | 2025-11-23 00:46:40 | INFO  | Task 
e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:40.026108 | orchestrator | 2025-11-23 00:46:40 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:40.026148 | orchestrator | 2025-11-23 00:46:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:43.061248 | orchestrator | 2025-11-23 00:46:43 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:43.061532 | orchestrator | 2025-11-23 00:46:43 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:43.062388 | orchestrator | 2025-11-23 00:46:43 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:43.063357 | orchestrator | 2025-11-23 00:46:43 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:43.063638 | orchestrator | 2025-11-23 00:46:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:46.096157 | orchestrator | 2025-11-23 00:46:46 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:46.096855 | orchestrator | 2025-11-23 00:46:46 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:46.097639 | orchestrator | 2025-11-23 00:46:46 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:46.099506 | orchestrator | 2025-11-23 00:46:46 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:46.099533 | orchestrator | 2025-11-23 00:46:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:49.134486 | orchestrator | 2025-11-23 00:46:49 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:49.136387 | orchestrator | 2025-11-23 00:46:49 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:49.138130 | orchestrator | 2025-11-23 00:46:49 | INFO  | Task 
e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:49.139608 | orchestrator | 2025-11-23 00:46:49 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:49.139698 | orchestrator | 2025-11-23 00:46:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:52.181410 | orchestrator | 2025-11-23 00:46:52 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:52.182118 | orchestrator | 2025-11-23 00:46:52 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:52.184207 | orchestrator | 2025-11-23 00:46:52 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:52.185996 | orchestrator | 2025-11-23 00:46:52 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:52.186119 | orchestrator | 2025-11-23 00:46:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:55.247672 | orchestrator | 2025-11-23 00:46:55 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:55.249628 | orchestrator | 2025-11-23 00:46:55 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:55.250881 | orchestrator | 2025-11-23 00:46:55 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:55.252777 | orchestrator | 2025-11-23 00:46:55 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:55.252819 | orchestrator | 2025-11-23 00:46:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:46:58.358958 | orchestrator | 2025-11-23 00:46:58 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:46:58.359646 | orchestrator | 2025-11-23 00:46:58 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:46:58.360293 | orchestrator | 2025-11-23 00:46:58 | INFO  | Task 
e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:46:58.361315 | orchestrator | 2025-11-23 00:46:58 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:46:58.361816 | orchestrator | 2025-11-23 00:46:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:01.392091 | orchestrator | 2025-11-23 00:47:01 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:01.394485 | orchestrator | 2025-11-23 00:47:01 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:01.396247 | orchestrator | 2025-11-23 00:47:01 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:01.398520 | orchestrator | 2025-11-23 00:47:01 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:47:01.398632 | orchestrator | 2025-11-23 00:47:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:04.431741 | orchestrator | 2025-11-23 00:47:04 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:04.432604 | orchestrator | 2025-11-23 00:47:04 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:04.433638 | orchestrator | 2025-11-23 00:47:04 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:04.434367 | orchestrator | 2025-11-23 00:47:04 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state STARTED 2025-11-23 00:47:04.434396 | orchestrator | 2025-11-23 00:47:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:07.465142 | orchestrator | 2025-11-23 00:47:07 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:07.466766 | orchestrator | 2025-11-23 00:47:07 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:07.467230 | orchestrator | 2025-11-23 00:47:07 | INFO  | Task 
e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:47:07.469704 | orchestrator | 2025-11-23 00:47:07 | INFO  | Task e1ff760a-4425-4328-ac1b-929616d372d8 is in state SUCCESS
2025-11-23 00:47:07.469747 | orchestrator | 2025-11-23 00:47:07 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:47:07.470846 | orchestrator |
2025-11-23 00:47:07.470902 | orchestrator |
2025-11-23 00:47:07.470912 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-11-23 00:47:07.470920 | orchestrator |
2025-11-23 00:47:07.470928 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-11-23 00:47:07.470935 | orchestrator | Sunday 23 November 2025 00:45:59 +0000 (0:00:00.122) 0:00:00.122 *******
2025-11-23 00:47:07.470943 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-11-23 00:47:07.470950 | orchestrator |
2025-11-23 00:47:07.470957 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-11-23 00:47:07.470963 | orchestrator | Sunday 23 November 2025 00:46:00 +0000 (0:00:00.698) 0:00:00.820 *******
2025-11-23 00:47:07.470990 | orchestrator | changed: [testbed-manager]
2025-11-23 00:47:07.470997 | orchestrator |
2025-11-23 00:47:07.471004 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-11-23 00:47:07.471011 | orchestrator | Sunday 23 November 2025 00:46:01 +0000 (0:00:01.170) 0:00:01.991 *******
2025-11-23 00:47:07.471018 | orchestrator | changed: [testbed-manager]
2025-11-23 00:47:07.471024 | orchestrator |
2025-11-23 00:47:07.471031 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:47:07.471051 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:47:07.471060 | orchestrator |
2025-11-23 00:47:07.471066 | orchestrator |
2025-11-23 00:47:07.471073 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:47:07.471080 | orchestrator | Sunday 23 November 2025 00:46:02 +0000 (0:00:00.490) 0:00:02.481 *******
2025-11-23 00:47:07.471086 | orchestrator | ===============================================================================
2025-11-23 00:47:07.471093 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.17s
2025-11-23 00:47:07.471100 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s
2025-11-23 00:47:07.471117 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.49s
2025-11-23 00:47:07.471124 | orchestrator |
2025-11-23 00:47:07.471131 | orchestrator |
2025-11-23 00:47:07.471137 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-11-23 00:47:07.471144 | orchestrator |
2025-11-23 00:47:07.471151 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-11-23 00:47:07.471158 | orchestrator | Sunday 23 November 2025 00:45:59 +0000 (0:00:00.165) 0:00:00.165 *******
2025-11-23 00:47:07.471164 | orchestrator | ok: [testbed-manager]
2025-11-23 00:47:07.471172 | orchestrator |
2025-11-23 00:47:07.471179 | orchestrator | TASK [Create .kube directory] **************************************************
2025-11-23 00:47:07.471185 | orchestrator | Sunday 23 November 2025 00:46:00 +0000 (0:00:00.509) 0:00:00.674 *******
2025-11-23 00:47:07.471192 | orchestrator | ok: [testbed-manager]
2025-11-23 00:47:07.471199 | orchestrator |
2025-11-23 00:47:07.471206 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-11-23 00:47:07.471213 | orchestrator | Sunday 23 November 2025 00:46:00 +0000 (0:00:00.526) 0:00:01.201 *******
2025-11-23 00:47:07.471219 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-11-23 00:47:07.471226 | orchestrator |
2025-11-23 00:47:07.471233 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-11-23 00:47:07.471240 | orchestrator | Sunday 23 November 2025 00:46:01 +0000 (0:00:00.744) 0:00:01.946 *******
2025-11-23 00:47:07.471246 | orchestrator | changed: [testbed-manager]
2025-11-23 00:47:07.471253 | orchestrator |
2025-11-23 00:47:07.471260 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-11-23 00:47:07.471266 | orchestrator | Sunday 23 November 2025 00:46:02 +0000 (0:00:01.395) 0:00:03.342 *******
2025-11-23 00:47:07.471273 | orchestrator | changed: [testbed-manager]
2025-11-23 00:47:07.471279 | orchestrator |
2025-11-23 00:47:07.471286 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-11-23 00:47:07.471293 | orchestrator | Sunday 23 November 2025 00:46:03 +0000 (0:00:00.501) 0:00:03.843 *******
2025-11-23 00:47:07.471299 | orchestrator | changed: [testbed-manager -> localhost]
2025-11-23 00:47:07.471306 | orchestrator |
2025-11-23 00:47:07.471313 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-11-23 00:47:07.471319 | orchestrator | Sunday 23 November 2025 00:46:04 +0000 (0:00:01.503) 0:00:05.346 *******
2025-11-23 00:47:07.471326 | orchestrator | changed: [testbed-manager -> localhost]
2025-11-23 00:47:07.471333 | orchestrator |
2025-11-23 00:47:07.471340 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-11-23 00:47:07.471346 | orchestrator | Sunday 23 November 2025 00:46:05 +0000 (0:00:00.752) 0:00:06.099 *******
2025-11-23 00:47:07.471358 | orchestrator | ok: [testbed-manager]
2025-11-23 00:47:07.471365 | orchestrator |
2025-11-23 00:47:07.471372 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-11-23 00:47:07.471378 | orchestrator | Sunday 23 November 2025 00:46:05 +0000 (0:00:00.367) 0:00:06.467 *******
2025-11-23 00:47:07.471385 | orchestrator | ok: [testbed-manager]
2025-11-23 00:47:07.471392 | orchestrator |
2025-11-23 00:47:07.471398 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:47:07.471406 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:47:07.471412 | orchestrator |
2025-11-23 00:47:07.471419 | orchestrator |
2025-11-23 00:47:07.471428 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:47:07.471435 | orchestrator | Sunday 23 November 2025 00:46:06 +0000 (0:00:00.296) 0:00:06.763 *******
2025-11-23 00:47:07.471443 | orchestrator | ===============================================================================
2025-11-23 00:47:07.471450 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.50s
2025-11-23 00:47:07.471458 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.40s
2025-11-23 00:47:07.471466 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.75s
2025-11-23 00:47:07.471485 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s
2025-11-23 00:47:07.471493 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s
2025-11-23 00:47:07.471501 | orchestrator | Get home directory of operator user ------------------------------------- 0.51s
2025-11-23 00:47:07.471508 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.50s
2025-11-23 00:47:07.471516 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s
2025-11-23 00:47:07.471524 | 
orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s
2025-11-23 00:47:07.471531 | orchestrator |
2025-11-23 00:47:07.471539 | orchestrator |
2025-11-23 00:47:07.471546 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-11-23 00:47:07.471606 | orchestrator |
2025-11-23 00:47:07.471615 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-11-23 00:47:07.471622 | orchestrator | Sunday 23 November 2025 00:44:46 +0000 (0:00:00.118) 0:00:00.118 *******
2025-11-23 00:47:07.471630 | orchestrator | ok: [localhost] => {
2025-11-23 00:47:07.471644 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-11-23 00:47:07.471652 | orchestrator | }
2025-11-23 00:47:07.471660 | orchestrator |
2025-11-23 00:47:07.471667 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-11-23 00:47:07.471675 | orchestrator | Sunday 23 November 2025 00:44:46 +0000 (0:00:00.081) 0:00:00.200 *******
2025-11-23 00:47:07.471683 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-11-23 00:47:07.471692 | orchestrator | ...ignoring
2025-11-23 00:47:07.471701 | orchestrator |
2025-11-23 00:47:07.471708 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-11-23 00:47:07.471716 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:04.360) 0:00:04.561 *******
2025-11-23 00:47:07.471724 | orchestrator | skipping: [localhost]
2025-11-23 00:47:07.471732 | orchestrator |
2025-11-23 00:47:07.471739 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-11-23 00:47:07.471747 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.085) 0:00:04.647 *******
2025-11-23 00:47:07.471755 | orchestrator | ok: [localhost]
2025-11-23 00:47:07.471762 | orchestrator |
2025-11-23 00:47:07.471770 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 00:47:07.471778 | orchestrator |
2025-11-23 00:47:07.471785 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 00:47:07.471796 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.147) 0:00:04.795 *******
2025-11-23 00:47:07.471803 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:47:07.471810 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:47:07.471817 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:47:07.471823 | orchestrator |
2025-11-23 00:47:07.471830 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 00:47:07.471837 | orchestrator | Sunday 23 November 2025 00:44:52 +0000 (0:00:00.484) 0:00:05.279 *******
2025-11-23 00:47:07.471843 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-11-23 00:47:07.471850 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-11-23 00:47:07.471857 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-11-23 00:47:07.471864 | orchestrator |
2025-11-23 00:47:07.471870 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-11-23 00:47:07.471877 | orchestrator |
2025-11-23 00:47:07.471884 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-11-23 00:47:07.471890 | orchestrator | Sunday 23 November 2025 00:44:53 +0000 (0:00:01.174) 0:00:06.454 *******
2025-11-23 00:47:07.471897 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:47:07.471904 | orchestrator |
2025-11-23 00:47:07.471911 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-11-23 00:47:07.471917 | orchestrator | Sunday 23 November 2025 00:44:54 +0000 (0:00:01.257) 0:00:07.712 *******
2025-11-23 00:47:07.471924 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:47:07.471931 | orchestrator |
2025-11-23 00:47:07.471937 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-11-23 00:47:07.471944 | orchestrator | Sunday 23 November 2025 00:44:55 +0000 (0:00:01.375) 0:00:09.088 *******
2025-11-23 00:47:07.471950 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:47:07.471957 | orchestrator |
2025-11-23 00:47:07.471964 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-11-23 00:47:07.471970 | orchestrator | Sunday 23 November 2025 00:44:56 +0000 (0:00:00.443) 0:00:09.532 *******
2025-11-23 00:47:07.471977 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:47:07.471984 | orchestrator |
2025-11-23 00:47:07.471990 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-11-23 00:47:07.471997 | orchestrator | Sunday 23 November 2025 00:44:56 +0000 (0:00:00.404) 0:00:09.937 *******
2025-11-23 00:47:07.472003 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:47:07.472010 | orchestrator |
2025-11-23 00:47:07.472017 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-11-23 00:47:07.472023 | orchestrator | Sunday 23 November 2025 00:44:56 +0000 (0:00:00.297) 0:00:10.234 *******
2025-11-23 00:47:07.472030 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:47:07.472037 | orchestrator |
2025-11-23 00:47:07.472043 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-11-23 00:47:07.472050 | orchestrator | Sunday 23 November 2025 00:44:57 +0000 (0:00:00.580) 0:00:10.815 *******
2025-11-23 00:47:07.472057 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:47:07.472063 | orchestrator |
2025-11-23 00:47:07.472070 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-11-23 00:47:07.472082 | orchestrator | Sunday 23 November 2025 00:44:58 +0000 (0:00:00.677) 0:00:11.492 *******
2025-11-23 00:47:07.472089 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:47:07.472095 | orchestrator |
2025-11-23 00:47:07.472102 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-11-23 00:47:07.472109 | orchestrator | Sunday 23 November 2025 00:44:59 +0000 (0:00:01.266) 0:00:12.759 *******
2025-11-23 00:47:07.472115 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:47:07.472122 | orchestrator |
2025-11-23 00:47:07.472129 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-11-23 00:47:07.472141 | orchestrator | Sunday 23 November 2025 00:44:59 +0000 (0:00:00.432) 0:00:13.191 *******
2025-11-23 00:47:07.472148 | orchestrator | 
skipping: [testbed-node-0]
2025-11-23 00:47:07.472154 | orchestrator |
2025-11-23 00:47:07.472161 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-11-23 00:47:07.472167 | orchestrator | Sunday 23 November 2025 00:45:00 +0000 (0:00:00.337) 0:00:13.528 *******
2025-11-23 00:47:07.472181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-23 00:47:07.472192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-23 00:47:07.472200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-23 00:47:07.472207 | orchestrator |
2025-11-23 00:47:07.472214 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-11-23 00:47:07.472221 | orchestrator | Sunday 23 November 2025 00:45:01 +0000 (0:00:00.942) 0:00:14.471 *******
2025-11-23 00:47:07.472234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-23 00:47:07.472312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-23 00:47:07.472326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-23 00:47:07.472334 | orchestrator | 2025-11-23 00:47:07.472341 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-11-23 00:47:07.472348 | orchestrator | Sunday 23 November 2025 00:45:03 +0000 (0:00:02.721) 0:00:17.192 ******* 2025-11-23 00:47:07.472354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-23 00:47:07.472361 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-23 00:47:07.472368 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-23 00:47:07.472374 | orchestrator | 2025-11-23 00:47:07.472381 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-11-23 00:47:07.472388 | orchestrator | Sunday 23 November 2025 00:45:06 +0000 (0:00:02.182) 0:00:19.375 ******* 2025-11-23 00:47:07.472394 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-23 00:47:07.472406 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-23 00:47:07.472413 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-23 00:47:07.472420 | orchestrator | 2025-11-23 00:47:07.472426 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-11-23 00:47:07.472438 | orchestrator | Sunday 23 November 2025 00:45:08 +0000 (0:00:02.337) 0:00:21.712 ******* 2025-11-23 00:47:07.472445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-23 00:47:07.472452 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-23 00:47:07.472459 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-23 00:47:07.472465 | orchestrator | 2025-11-23 00:47:07.472472 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-11-23 00:47:07.472478 | orchestrator | Sunday 23 November 2025 00:45:09 +0000 (0:00:01.243) 0:00:22.956 ******* 2025-11-23 00:47:07.472485 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-23 00:47:07.472491 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-23 00:47:07.472498 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-23 00:47:07.472505 | orchestrator | 2025-11-23 00:47:07.472515 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-11-23 00:47:07.472522 | orchestrator | Sunday 23 November 2025 00:45:11 +0000 (0:00:01.418) 0:00:24.374 ******* 2025-11-23 00:47:07.472528 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-23 00:47:07.472535 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-23 00:47:07.472541 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-23 00:47:07.472560 | orchestrator | 2025-11-23 00:47:07.472567 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-11-23 00:47:07.472574 | orchestrator | Sunday 23 November 2025 00:45:12 +0000 (0:00:01.269) 0:00:25.644 ******* 2025-11-23 00:47:07.472581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-23 00:47:07.472587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-23 00:47:07.472594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-23 00:47:07.472600 | orchestrator | 2025-11-23 00:47:07.472607 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-11-23 00:47:07.472614 | orchestrator | Sunday 23 November 2025 00:45:13 +0000 (0:00:01.290) 0:00:26.934 ******* 2025-11-23 00:47:07.472620 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:47:07.472627 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:47:07.472634 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:47:07.472640 | orchestrator | 2025-11-23 00:47:07.472647 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-11-23 00:47:07.472653 | orchestrator | Sunday 23 November 2025 00:45:14 
+0000 (0:00:00.454) 0:00:27.388 ******* 2025-11-23 00:47:07.472661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-23 00:47:07.472678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-23 00:47:07.472690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-23 00:47:07.472698 | orchestrator | 2025-11-23 00:47:07.472704 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-11-23 00:47:07.472711 | orchestrator | Sunday 23 November 2025 00:45:15 +0000 (0:00:01.360) 0:00:28.749 ******* 2025-11-23 00:47:07.472717 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:47:07.472724 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:47:07.472731 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:47:07.472737 | orchestrator | 2025-11-23 00:47:07.472744 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-11-23 00:47:07.472750 | 
orchestrator | Sunday 23 November 2025 00:45:16 +0000 (0:00:00.985) 0:00:29.735 ******* 2025-11-23 00:47:07.472757 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:47:07.472764 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:47:07.472770 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:47:07.472777 | orchestrator | 2025-11-23 00:47:07.472783 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-11-23 00:47:07.472790 | orchestrator | Sunday 23 November 2025 00:45:23 +0000 (0:00:06.897) 0:00:36.632 ******* 2025-11-23 00:47:07.472797 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:47:07.472803 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:47:07.472810 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:47:07.472817 | orchestrator | 2025-11-23 00:47:07.472827 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-23 00:47:07.472834 | orchestrator | 2025-11-23 00:47:07.472841 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-23 00:47:07.472847 | orchestrator | Sunday 23 November 2025 00:45:23 +0000 (0:00:00.319) 0:00:36.952 ******* 2025-11-23 00:47:07.472854 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:47:07.472861 | orchestrator | 2025-11-23 00:47:07.472867 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-23 00:47:07.472874 | orchestrator | Sunday 23 November 2025 00:45:24 +0000 (0:00:00.654) 0:00:37.607 ******* 2025-11-23 00:47:07.472881 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:47:07.472887 | orchestrator | 2025-11-23 00:47:07.472894 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-23 00:47:07.472900 | orchestrator | Sunday 23 November 2025 00:45:24 +0000 (0:00:00.223) 0:00:37.830 ******* 2025-11-23 00:47:07.472907 | orchestrator 
| changed: [testbed-node-0] 2025-11-23 00:47:07.472913 | orchestrator | 2025-11-23 00:47:07.472920 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-23 00:47:07.472927 | orchestrator | Sunday 23 November 2025 00:45:26 +0000 (0:00:01.830) 0:00:39.661 ******* 2025-11-23 00:47:07.472933 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:47:07.472940 | orchestrator | 2025-11-23 00:47:07.472946 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-23 00:47:07.472953 | orchestrator | 2025-11-23 00:47:07.472960 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-23 00:47:07.472966 | orchestrator | Sunday 23 November 2025 00:46:23 +0000 (0:00:57.308) 0:01:36.969 ******* 2025-11-23 00:47:07.472973 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:47:07.472979 | orchestrator | 2025-11-23 00:47:07.472986 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-23 00:47:07.472992 | orchestrator | Sunday 23 November 2025 00:46:24 +0000 (0:00:00.580) 0:01:37.549 ******* 2025-11-23 00:47:07.472999 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:47:07.473006 | orchestrator | 2025-11-23 00:47:07.473012 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-23 00:47:07.473019 | orchestrator | Sunday 23 November 2025 00:46:24 +0000 (0:00:00.243) 0:01:37.793 ******* 2025-11-23 00:47:07.473026 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:47:07.473032 | orchestrator | 2025-11-23 00:47:07.473039 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-23 00:47:07.473045 | orchestrator | Sunday 23 November 2025 00:46:26 +0000 (0:00:02.178) 0:01:39.972 ******* 2025-11-23 00:47:07.473052 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:47:07.473058 
| orchestrator | 2025-11-23 00:47:07.473065 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-23 00:47:07.473071 | orchestrator | 2025-11-23 00:47:07.473078 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-23 00:47:07.473085 | orchestrator | Sunday 23 November 2025 00:46:42 +0000 (0:00:16.132) 0:01:56.105 ******* 2025-11-23 00:47:07.473091 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:47:07.473098 | orchestrator | 2025-11-23 00:47:07.473108 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-23 00:47:07.473115 | orchestrator | Sunday 23 November 2025 00:46:43 +0000 (0:00:00.637) 0:01:56.742 ******* 2025-11-23 00:47:07.473121 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:47:07.473128 | orchestrator | 2025-11-23 00:47:07.473134 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-23 00:47:07.473141 | orchestrator | Sunday 23 November 2025 00:46:43 +0000 (0:00:00.234) 0:01:56.977 ******* 2025-11-23 00:47:07.473148 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:47:07.473154 | orchestrator | 2025-11-23 00:47:07.473161 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-23 00:47:07.473167 | orchestrator | Sunday 23 November 2025 00:46:45 +0000 (0:00:02.201) 0:01:59.179 ******* 2025-11-23 00:47:07.473178 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:47:07.473184 | orchestrator | 2025-11-23 00:47:07.473191 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-11-23 00:47:07.473198 | orchestrator | 2025-11-23 00:47:07.473204 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-11-23 00:47:07.473211 | orchestrator | Sunday 23 November 2025 00:47:02 +0000 (0:00:16.726) 
0:02:15.905 ******* 2025-11-23 00:47:07.473221 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:47:07.473227 | orchestrator | 2025-11-23 00:47:07.473234 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-11-23 00:47:07.473241 | orchestrator | Sunday 23 November 2025 00:47:03 +0000 (0:00:00.678) 0:02:16.584 ******* 2025-11-23 00:47:07.473247 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-23 00:47:07.473254 | orchestrator | enable_outward_rabbitmq_True 2025-11-23 00:47:07.473260 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-23 00:47:07.473267 | orchestrator | outward_rabbitmq_restart 2025-11-23 00:47:07.473274 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:47:07.473281 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:47:07.473287 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:47:07.473294 | orchestrator | 2025-11-23 00:47:07.473300 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-11-23 00:47:07.473307 | orchestrator | skipping: no hosts matched 2025-11-23 00:47:07.473314 | orchestrator | 2025-11-23 00:47:07.473320 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-11-23 00:47:07.473327 | orchestrator | skipping: no hosts matched 2025-11-23 00:47:07.473333 | orchestrator | 2025-11-23 00:47:07.473340 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-11-23 00:47:07.473347 | orchestrator | skipping: no hosts matched 2025-11-23 00:47:07.473353 | orchestrator | 2025-11-23 00:47:07.473360 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:47:07.473367 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-11-23 
00:47:07.473374 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-23 00:47:07.473381 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:47:07.473388 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-23 00:47:07.473394 | orchestrator | 2025-11-23 00:47:07.473401 | orchestrator | 2025-11-23 00:47:07.473408 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:47:07.473414 | orchestrator | Sunday 23 November 2025 00:47:05 +0000 (0:00:02.474) 0:02:19.058 ******* 2025-11-23 00:47:07.473421 | orchestrator | =============================================================================== 2025-11-23 00:47:07.473428 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 90.17s 2025-11-23 00:47:07.473434 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.90s 2025-11-23 00:47:07.473441 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.21s 2025-11-23 00:47:07.473447 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.36s 2025-11-23 00:47:07.473454 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.72s 2025-11-23 00:47:07.473460 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.47s 2025-11-23 00:47:07.473467 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.34s 2025-11-23 00:47:07.473473 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.18s 2025-11-23 00:47:07.473485 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.87s 2025-11-23 00:47:07.473492 | 
orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.42s 2025-11-23 00:47:07.473499 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.38s 2025-11-23 00:47:07.473505 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.36s 2025-11-23 00:47:07.473512 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.29s 2025-11-23 00:47:07.473518 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.27s 2025-11-23 00:47:07.473525 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.27s 2025-11-23 00:47:07.473532 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.26s 2025-11-23 00:47:07.473538 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.24s 2025-11-23 00:47:07.473562 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.17s 2025-11-23 00:47:07.473569 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.99s 2025-11-23 00:47:07.473576 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.94s 2025-11-23 00:47:10.507718 | orchestrator | 2025-11-23 00:47:10 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:10.510292 | orchestrator | 2025-11-23 00:47:10 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:10.513881 | orchestrator | 2025-11-23 00:47:10 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:10.513914 | orchestrator | 2025-11-23 00:47:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:13.544919 | orchestrator | 2025-11-23 00:47:13 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 
2025-11-23 00:47:13.546276 | orchestrator | 2025-11-23 00:47:13 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:13.547899 | orchestrator | 2025-11-23 00:47:13 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:13.547974 | orchestrator | 2025-11-23 00:47:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:16.583433 | orchestrator | 2025-11-23 00:47:16 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:16.586345 | orchestrator | 2025-11-23 00:47:16 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:16.588338 | orchestrator | 2025-11-23 00:47:16 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:16.588478 | orchestrator | 2025-11-23 00:47:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:19.628805 | orchestrator | 2025-11-23 00:47:19 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:19.630710 | orchestrator | 2025-11-23 00:47:19 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:19.632380 | orchestrator | 2025-11-23 00:47:19 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:19.632429 | orchestrator | 2025-11-23 00:47:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:22.671636 | orchestrator | 2025-11-23 00:47:22 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:22.673178 | orchestrator | 2025-11-23 00:47:22 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:22.674943 | orchestrator | 2025-11-23 00:47:22 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:22.675321 | orchestrator | 2025-11-23 00:47:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:25.708885 | orchestrator | 2025-11-23 
00:47:25 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:25.708979 | orchestrator | 2025-11-23 00:47:25 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:25.710675 | orchestrator | 2025-11-23 00:47:25 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:25.710702 | orchestrator | 2025-11-23 00:47:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:28.733347 | orchestrator | 2025-11-23 00:47:28 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:28.733690 | orchestrator | 2025-11-23 00:47:28 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:28.734971 | orchestrator | 2025-11-23 00:47:28 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:28.735024 | orchestrator | 2025-11-23 00:47:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:31.774697 | orchestrator | 2025-11-23 00:47:31 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:31.776015 | orchestrator | 2025-11-23 00:47:31 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:31.777689 | orchestrator | 2025-11-23 00:47:31 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:31.777719 | orchestrator | 2025-11-23 00:47:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:34.810316 | orchestrator | 2025-11-23 00:47:34 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:34.813278 | orchestrator | 2025-11-23 00:47:34 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:34.817830 | orchestrator | 2025-11-23 00:47:34 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:34.821572 | orchestrator | 2025-11-23 00:47:34 | INFO  | Wait 1 
second(s) until the next check 2025-11-23 00:47:37.850375 | orchestrator | 2025-11-23 00:47:37 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:37.851075 | orchestrator | 2025-11-23 00:47:37 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:37.852802 | orchestrator | 2025-11-23 00:47:37 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:37.853065 | orchestrator | 2025-11-23 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:40.893719 | orchestrator | 2025-11-23 00:47:40 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:40.893852 | orchestrator | 2025-11-23 00:47:40 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:40.894631 | orchestrator | 2025-11-23 00:47:40 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:40.896139 | orchestrator | 2025-11-23 00:47:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:43.929387 | orchestrator | 2025-11-23 00:47:43 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:43.929736 | orchestrator | 2025-11-23 00:47:43 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:43.931358 | orchestrator | 2025-11-23 00:47:43 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:47:43.931691 | orchestrator | 2025-11-23 00:47:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:47:46.957245 | orchestrator | 2025-11-23 00:47:46 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:47:46.957435 | orchestrator | 2025-11-23 00:47:46 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED 2025-11-23 00:47:46.958301 | orchestrator | 2025-11-23 00:47:46 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 
2025-11-23 00:47:46.958325 | orchestrator | 2025-11-23 00:47:46 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:47:49.992013 | orchestrator | 2025-11-23 00:47:49 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:47:49.992236 | orchestrator | 2025-11-23 00:47:49 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:47:49.993269 | orchestrator | 2025-11-23 00:47:49 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:47:49.993297 | orchestrator | 2025-11-23 00:47:49 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:47:53.035685 | orchestrator | 2025-11-23 00:47:53 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:47:53.035782 | orchestrator | 2025-11-23 00:47:53 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:47:53.035797 | orchestrator | 2025-11-23 00:47:53 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:47:53.035809 | orchestrator | 2025-11-23 00:47:53 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:47:56.064093 | orchestrator | 2025-11-23 00:47:56 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:47:56.064749 | orchestrator | 2025-11-23 00:47:56 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:47:56.067009 | orchestrator | 2025-11-23 00:47:56 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:47:56.067041 | orchestrator | 2025-11-23 00:47:56 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:47:59.097435 | orchestrator | 2025-11-23 00:47:59 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:47:59.099225 | orchestrator | 2025-11-23 00:47:59 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:47:59.101104 | orchestrator | 2025-11-23 00:47:59 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:47:59.101315 | orchestrator | 2025-11-23 00:47:59 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:48:02.139735 | orchestrator | 2025-11-23 00:48:02 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:48:02.139825 | orchestrator | 2025-11-23 00:48:02 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:48:02.139840 | orchestrator | 2025-11-23 00:48:02 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:48:02.139852 | orchestrator | 2025-11-23 00:48:02 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:48:05.156736 | orchestrator | 2025-11-23 00:48:05 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:48:05.156916 | orchestrator | 2025-11-23 00:48:05 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:48:05.157870 | orchestrator | 2025-11-23 00:48:05 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:48:05.157911 | orchestrator | 2025-11-23 00:48:05 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:48:08.207526 | orchestrator | 2025-11-23 00:48:08 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:48:08.208951 | orchestrator | 2025-11-23 00:48:08 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state STARTED
2025-11-23 00:48:08.210675 | orchestrator | 2025-11-23 00:48:08 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED
2025-11-23 00:48:08.211667 | orchestrator | 2025-11-23 00:48:08 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:48:11.253841 | orchestrator | 2025-11-23 00:48:11 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:48:11.256360 | orchestrator | 2025-11-23 00:48:11 | INFO  | Task ea594a40-004c-4fac-8cb9-7616d485abf4 is in state SUCCESS
2025-11-23 00:48:11.257698 | orchestrator |
2025-11-23 00:48:11.257853 | orchestrator |
2025-11-23 00:48:11.258154 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 00:48:11.258177 | orchestrator |
2025-11-23 00:48:11.259174 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 00:48:11.259195 | orchestrator | Sunday 23 November 2025 00:45:39 +0000 (0:00:00.149) 0:00:00.149 *******
2025-11-23 00:48:11.259212 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:48:11.259229 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:48:11.259244 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:48:11.259259 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:48:11.259275 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:48:11.259290 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:48:11.259307 | orchestrator |
2025-11-23 00:48:11.259323 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 00:48:11.259340 | orchestrator | Sunday 23 November 2025 00:45:39 +0000 (0:00:00.877) 0:00:01.026 *******
2025-11-23 00:48:11.259357 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-11-23 00:48:11.259393 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-11-23 00:48:11.259405 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-11-23 00:48:11.259414 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-11-23 00:48:11.259425 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-11-23 00:48:11.259435 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-11-23 00:48:11.259444 | orchestrator |
2025-11-23 00:48:11.259454 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-11-23 00:48:11.259464 | orchestrator |
2025-11-23 00:48:11.259474 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-11-23 00:48:11.259484 | orchestrator | Sunday 23 November 2025 00:45:41 +0000 (0:00:01.242) 0:00:02.268 *******
2025-11-23 00:48:11.259495 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:48:11.259535 | orchestrator |
2025-11-23 00:48:11.259545 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-11-23 00:48:11.259555 | orchestrator | Sunday 23 November 2025 00:45:42 +0000 (0:00:00.869) 0:00:03.138 *******
2025-11-23 00:48:11.259567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259657 | orchestrator |
2025-11-23 00:48:11.259681 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-11-23 00:48:11.259691 | orchestrator | Sunday 23 November 2025 00:45:43 +0000 (0:00:00.938) 0:00:04.076 *******
2025-11-23 00:48:11.259701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259765 | orchestrator |
2025-11-23 00:48:11.259775 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-11-23 00:48:11.259785 | orchestrator | Sunday 23 November 2025 00:45:44 +0000 (0:00:01.575) 0:00:05.651 *******
2025-11-23 00:48:11.259794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259907 | orchestrator |
2025-11-23 00:48:11.259919 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-11-23 00:48:11.259930 | orchestrator | Sunday 23 November 2025 00:45:45 +0000 (0:00:01.042) 0:00:06.693 *******
2025-11-23 00:48:11.259941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259961 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.259984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260023 | orchestrator |
2025-11-23 00:48:11.260039 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-11-23 00:48:11.260050 | orchestrator | Sunday 23 November 2025 00:45:47 +0000 (0:00:01.561) 0:00:08.255 *******
2025-11-23 00:48:11.260062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-23 00:48:11.260135 | orchestrator |
2025-11-23 00:48:11.260146 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-11-23 00:48:11.260157 | orchestrator | Sunday 23 November 2025 00:45:48 +0000 (0:00:01.236) 0:00:09.492 *******
2025-11-23 00:48:11.260168 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:48:11.260177 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:48:11.260187 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:48:11.260196 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:48:11.260206 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:48:11.260215 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:48:11.260224 | orchestrator |
2025-11-23 00:48:11.260234 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-11-23 00:48:11.260243 | orchestrator | Sunday 23 November 2025 00:45:51 +0000 (0:00:02.742) 0:00:12.234 *******
2025-11-23 00:48:11.260255 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-11-23 00:48:11.260272 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-11-23 00:48:11.260288 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-11-23 00:48:11.260310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-11-23 00:48:11.260324 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-11-23 00:48:11.260340 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-11-23 00:48:11.260357 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-11-23 00:48:11.260372 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-11-23 00:48:11.260397 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-11-23 00:48:11.260414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-11-23 00:48:11.260430 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-11-23 00:48:11.260446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-11-23 00:48:11.260462 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-11-23 00:48:11.260481 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-11-23 00:48:11.260560 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-11-23 00:48:11.260579 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-11-23 00:48:11.260594 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-11-23 00:48:11.260610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-11-23 00:48:11.260626 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-11-23 00:48:11.260644 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-11-23 00:48:11.260659 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-11-23 00:48:11.260675 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-11-23 00:48:11.260691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-11-23 00:48:11.260707 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-11-23 00:48:11.260723 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-11-23 00:48:11.260738 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-11-23 00:48:11.260754 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-11-23 00:48:11.260770 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-11-23 00:48:11.260786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-11-23 00:48:11.260802 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-11-23 00:48:11.260818 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-11-23 00:48:11.260834 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-11-23 00:48:11.260850 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-11-23 00:48:11.260865 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-11-23 00:48:11.260880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-11-23 00:48:11.260896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-11-23 00:48:11.260911 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-11-23 00:48:11.260927 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-11-23 00:48:11.260942 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-11-23 00:48:11.260957 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-11-23 00:48:11.260979 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-11-23 00:48:11.260995 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-11-23 00:48:11.261011 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-11-23 00:48:11.261036 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-11-23 00:48:11.261060 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-11-23 00:48:11.261076 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-11-23 00:48:11.261090 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-11-23 00:48:11.261106 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-11-23 00:48:11.261122 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-11-23 00:48:11.261137 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-11-23 00:48:11.261152 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-11-23 00:48:11.261168 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-11-23 00:48:11.261183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-11-23 00:48:11.261199 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-11-23 00:48:11.261214 | orchestrator |
2025-11-23 00:48:11.261231 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-11-23 00:48:11.261246 | orchestrator | Sunday 23 November 2025 00:46:12 +0000 (0:00:20.919) 0:00:33.153 *******
2025-11-23 00:48:11.261262 | orchestrator |
2025-11-23 00:48:11.261279 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-11-23 00:48:11.261294 | orchestrator | Sunday 23 November 2025 00:46:12 +0000 (0:00:00.160) 0:00:33.314 *******
2025-11-23 00:48:11.261308 | orchestrator |
2025-11-23 00:48:11.261323 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-11-23 00:48:11.261338 | orchestrator | Sunday 23 November 2025 00:46:12 +0000 (0:00:00.061) 0:00:33.375 *******
2025-11-23 00:48:11.261352 | orchestrator |
2025-11-23 00:48:11.261367 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-11-23 00:48:11.261383 | orchestrator | Sunday 23 November 2025 00:46:12 +0000 (0:00:00.060) 0:00:33.436 *******
2025-11-23 00:48:11.261399 | orchestrator |
2025-11-23 00:48:11.261415 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-11-23 00:48:11.261432 | orchestrator | Sunday 23 November 2025 00:46:12 +0000 (0:00:00.056) 0:00:33.492 *******
2025-11-23 00:48:11.261448 | orchestrator |
2025-11-23 00:48:11.261465 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-11-23 00:48:11.261482 | orchestrator | Sunday 23 November 2025 00:46:12 +0000 (0:00:00.057) 0:00:33.549 *******
2025-11-23 00:48:11.261521 | orchestrator |
2025-11-23 00:48:11.261537 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-11-23 00:48:11.261547 | orchestrator | Sunday 23 November 2025 00:46:12 +0000 (0:00:00.063) 0:00:33.613 *******
2025-11-23 00:48:11.261557 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:48:11.261566 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:48:11.261576 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:48:11.261585 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:48:11.261594 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:48:11.261604 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:48:11.261613 | orchestrator |
2025-11-23 00:48:11.261623 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-11-23 00:48:11.261642 | orchestrator | Sunday 23 November 2025 00:46:14 +0000 (0:00:01.518) 0:00:35.132 *******
2025-11-23 00:48:11.261652 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:48:11.261661 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:48:11.261671 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:48:11.261680 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:48:11.261689 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:48:11.261699 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:48:11.261708 | orchestrator |
2025-11-23 00:48:11.261718 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-11-23 00:48:11.261727 | orchestrator |
2025-11-23 00:48:11.261736 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-11-23 00:48:11.261746 | orchestrator | Sunday 23 November 2025 00:46:51 +0000 (0:00:37.708) 0:01:12.840 *******
2025-11-23 00:48:11.261755 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:48:11.261765 | orchestrator |
2025-11-23 00:48:11.261774 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-11-23 00:48:11.261784 | orchestrator | Sunday 23 November 2025 00:46:52 +0000 (0:00:00.572) 0:01:13.412 *******
2025-11-23 00:48:11.261799 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:48:11.261809 | orchestrator |
2025-11-23 00:48:11.261819 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-11-23 00:48:11.261828 | orchestrator | Sunday 23 November 2025 00:46:52 +0000 (0:00:00.532) 0:01:13.944 *******
2025-11-23 00:48:11.261838 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:48:11.261847 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:48:11.261856 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:48:11.261866 | orchestrator |
2025-11-23 00:48:11.261875 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-11-23 00:48:11.261885 | orchestrator | Sunday 23 November 2025 00:46:54 +0000 (0:00:01.159) 0:01:15.103 *******
2025-11-23 00:48:11.261894 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:48:11.261904 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:48:11.261913 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:48:11.261930 | orchestrator |
2025-11-23 00:48:11.261940 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-11-23 00:48:11.261950 | orchestrator | Sunday 23 November 2025 00:46:54 +0000 (0:00:00.442) 0:01:15.546 *******
2025-11-23 00:48:11.261959 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:48:11.261969 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:48:11.261978 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:48:11.261987 | orchestrator |
2025-11-23 00:48:11.261996 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-11-23 00:48:11.262006 | orchestrator | Sunday 23 November 2025 00:46:54 +0000 (0:00:00.297) 0:01:15.844 *******
2025-11-23 00:48:11.262064 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:48:11.262077 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:48:11.262087 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:48:11.262096 | orchestrator |
2025-11-23 00:48:11.262106 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-11-23 00:48:11.262115 | orchestrator | Sunday 23 November 2025 00:46:55 +0000 (0:00:00.300) 0:01:16.144 *******
2025-11-23 00:48:11.262124 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:48:11.262134 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:48:11.262143 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:48:11.262152 | orchestrator |
2025-11-23 00:48:11.262162 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-11-23 00:48:11.262171 | orchestrator | Sunday 23 November 2025 00:46:55 +0000 (0:00:00.471) 0:01:16.615 *******
2025-11-23 00:48:11.262181 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262190 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262200 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262216 | orchestrator |
2025-11-23 00:48:11.262226 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-11-23 00:48:11.262235 | orchestrator | Sunday 23 November 2025 00:46:55 +0000 (0:00:00.303) 0:01:16.919 *******
2025-11-23 00:48:11.262245 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262254 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262263 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262273 | orchestrator |
2025-11-23 00:48:11.262282 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-11-23 00:48:11.262292 | orchestrator | Sunday 23 November 2025 00:46:56 +0000 (0:00:00.334) 0:01:17.253 *******
2025-11-23 00:48:11.262301 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262311 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262320 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262329 | orchestrator |
2025-11-23 00:48:11.262339 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-11-23 00:48:11.262348 | orchestrator | Sunday 23 November 2025 00:46:56 +0000 (0:00:00.345) 0:01:17.598 *******
2025-11-23 00:48:11.262357 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262367 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262376 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262385 | orchestrator |
2025-11-23 00:48:11.262395 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-11-23 00:48:11.262404 | orchestrator | Sunday 23 November 2025 00:46:57 +0000 (0:00:00.491) 0:01:18.090 *******
2025-11-23 00:48:11.262414 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262423 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262432 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262442 | orchestrator |
2025-11-23 00:48:11.262451 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-11-23 00:48:11.262460 | orchestrator | Sunday 23 November 2025 00:46:57 +0000 (0:00:00.416) 0:01:18.507 *******
2025-11-23 00:48:11.262470 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262479 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262489 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262516 | orchestrator |
2025-11-23 00:48:11.262526 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-11-23 00:48:11.262535 | orchestrator | Sunday 23 November 2025 00:46:57 +0000 (0:00:00.273) 0:01:18.780 *******
2025-11-23 00:48:11.262545 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262554 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262564 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262573 | orchestrator |
2025-11-23 00:48:11.262582 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-11-23 00:48:11.262592 | orchestrator | Sunday 23 November 2025 00:46:58 +0000 (0:00:00.273) 0:01:19.053 *******
2025-11-23 00:48:11.262601 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262610 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262620 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262629 | orchestrator |
2025-11-23 00:48:11.262639 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-11-23 00:48:11.262648 | orchestrator | Sunday 23 November 2025 00:46:58 +0000 (0:00:00.258) 0:01:19.312 *******
2025-11-23 00:48:11.262657 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:48:11.262667 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:48:11.262676 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:48:11.262685 | orchestrator |
2025-11-23 00:48:11.262695 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-11-23 00:48:11.262704 | orchestrator | Sunday 23
November 2025 00:46:58 +0000 (0:00:00.406) 0:01:19.719 ******* 2025-11-23 00:48:11.262714 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.262723 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.262732 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.262742 | orchestrator | 2025-11-23 00:48:11.262767 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-11-23 00:48:11.262777 | orchestrator | Sunday 23 November 2025 00:46:58 +0000 (0:00:00.262) 0:01:19.981 ******* 2025-11-23 00:48:11.262786 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.262796 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.262805 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.262815 | orchestrator | 2025-11-23 00:48:11.262824 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-11-23 00:48:11.262834 | orchestrator | Sunday 23 November 2025 00:46:59 +0000 (0:00:00.261) 0:01:20.242 ******* 2025-11-23 00:48:11.262843 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.262853 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.262869 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.262879 | orchestrator | 2025-11-23 00:48:11.262889 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-23 00:48:11.262898 | orchestrator | Sunday 23 November 2025 00:46:59 +0000 (0:00:00.286) 0:01:20.529 ******* 2025-11-23 00:48:11.262908 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:48:11.262917 | orchestrator | 2025-11-23 00:48:11.262926 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-11-23 00:48:11.262936 | orchestrator | Sunday 23 November 2025 00:47:00 +0000 (0:00:00.660) 0:01:21.190 
******* 2025-11-23 00:48:11.262945 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.262955 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.262964 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.262973 | orchestrator | 2025-11-23 00:48:11.262983 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-11-23 00:48:11.262992 | orchestrator | Sunday 23 November 2025 00:47:00 +0000 (0:00:00.404) 0:01:21.595 ******* 2025-11-23 00:48:11.263001 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.263011 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.263020 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.263029 | orchestrator | 2025-11-23 00:48:11.263039 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-11-23 00:48:11.263048 | orchestrator | Sunday 23 November 2025 00:47:00 +0000 (0:00:00.395) 0:01:21.990 ******* 2025-11-23 00:48:11.263058 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.263067 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.263076 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.263086 | orchestrator | 2025-11-23 00:48:11.263095 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-11-23 00:48:11.263104 | orchestrator | Sunday 23 November 2025 00:47:01 +0000 (0:00:00.467) 0:01:22.458 ******* 2025-11-23 00:48:11.263114 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.263123 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.263132 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.263142 | orchestrator | 2025-11-23 00:48:11.263151 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-11-23 00:48:11.263161 | orchestrator | Sunday 23 November 2025 00:47:01 +0000 (0:00:00.303) 0:01:22.762 ******* 2025-11-23 
00:48:11.263170 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.263179 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.263189 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.263198 | orchestrator | 2025-11-23 00:48:11.263207 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-11-23 00:48:11.263217 | orchestrator | Sunday 23 November 2025 00:47:02 +0000 (0:00:00.293) 0:01:23.055 ******* 2025-11-23 00:48:11.263226 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.263236 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.263245 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.263255 | orchestrator | 2025-11-23 00:48:11.263264 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-11-23 00:48:11.263279 | orchestrator | Sunday 23 November 2025 00:47:02 +0000 (0:00:00.338) 0:01:23.393 ******* 2025-11-23 00:48:11.263288 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.263298 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.263307 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.263316 | orchestrator | 2025-11-23 00:48:11.263326 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-11-23 00:48:11.263335 | orchestrator | Sunday 23 November 2025 00:47:02 +0000 (0:00:00.542) 0:01:23.935 ******* 2025-11-23 00:48:11.263344 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.263354 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.263363 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.263372 | orchestrator | 2025-11-23 00:48:11.263382 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-11-23 00:48:11.263391 | orchestrator | Sunday 23 November 2025 00:47:03 +0000 (0:00:00.596) 0:01:24.532 ******* 
2025-11-23 00:48:11.263402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263552 | orchestrator | 2025-11-23 00:48:11.263562 | orchestrator | TASK [ovn-db : Copying over config.json files for 
services] ******************** 2025-11-23 00:48:11.263572 | orchestrator | Sunday 23 November 2025 00:47:04 +0000 (0:00:01.474) 0:01:26.006 ******* 2025-11-23 00:48:11.263582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-11-23 00:48:11.263685 | orchestrator | 2025-11-23 00:48:11.263695 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-11-23 00:48:11.263705 | orchestrator | Sunday 23 November 2025 00:47:08 +0000 (0:00:03.895) 0:01:29.902 ******* 2025-11-23 00:48:11.263715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263758 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.263819 | orchestrator | 2025-11-23 00:48:11.263829 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-23 00:48:11.263838 | orchestrator | Sunday 23 November 2025 00:47:10 +0000 (0:00:02.074) 0:01:31.976 ******* 2025-11-23 00:48:11.263848 | orchestrator | 2025-11-23 00:48:11.263858 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-23 00:48:11.263867 | orchestrator | Sunday 23 November 2025 00:47:11 +0000 (0:00:00.213) 0:01:32.189 ******* 2025-11-23 00:48:11.263877 | orchestrator | 2025-11-23 00:48:11.263886 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-23 00:48:11.263895 | orchestrator | Sunday 23 November 2025 00:47:11 +0000 (0:00:00.059) 0:01:32.248 ******* 2025-11-23 00:48:11.263905 | orchestrator | 2025-11-23 00:48:11.263914 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-11-23 00:48:11.263924 | orchestrator | Sunday 23 November 2025 00:47:11 +0000 (0:00:00.059) 0:01:32.308 ******* 2025-11-23 00:48:11.263933 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:48:11.263943 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:48:11.263952 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:48:11.263962 | orchestrator | 2025-11-23 00:48:11.263971 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-11-23 00:48:11.263980 | orchestrator | Sunday 23 November 2025 00:47:18 +0000 (0:00:07.382) 0:01:39.691 ******* 2025-11-23 00:48:11.263990 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:48:11.263999 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:48:11.264008 | 
orchestrator | changed: [testbed-node-0] 2025-11-23 00:48:11.264018 | orchestrator | 2025-11-23 00:48:11.264027 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-11-23 00:48:11.264036 | orchestrator | Sunday 23 November 2025 00:47:25 +0000 (0:00:06.504) 0:01:46.195 ******* 2025-11-23 00:48:11.264046 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:48:11.264055 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:48:11.264064 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:48:11.264074 | orchestrator | 2025-11-23 00:48:11.264083 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-11-23 00:48:11.264093 | orchestrator | Sunday 23 November 2025 00:47:32 +0000 (0:00:07.487) 0:01:53.682 ******* 2025-11-23 00:48:11.264102 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.264111 | orchestrator | 2025-11-23 00:48:11.264121 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-11-23 00:48:11.264130 | orchestrator | Sunday 23 November 2025 00:47:32 +0000 (0:00:00.244) 0:01:53.927 ******* 2025-11-23 00:48:11.264139 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.264149 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.264158 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.264168 | orchestrator | 2025-11-23 00:48:11.264177 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-11-23 00:48:11.264187 | orchestrator | Sunday 23 November 2025 00:47:33 +0000 (0:00:00.702) 0:01:54.629 ******* 2025-11-23 00:48:11.264196 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.264205 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.264214 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:48:11.264224 | orchestrator | 2025-11-23 00:48:11.264233 | orchestrator | TASK [ovn-db : Get OVN_Southbound 
cluster leader] ****************************** 2025-11-23 00:48:11.264243 | orchestrator | Sunday 23 November 2025 00:47:34 +0000 (0:00:00.580) 0:01:55.210 ******* 2025-11-23 00:48:11.264258 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.264267 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.264281 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.264290 | orchestrator | 2025-11-23 00:48:11.264300 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-11-23 00:48:11.264309 | orchestrator | Sunday 23 November 2025 00:47:34 +0000 (0:00:00.675) 0:01:55.885 ******* 2025-11-23 00:48:11.264318 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.264328 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.264337 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:48:11.264347 | orchestrator | 2025-11-23 00:48:11.264356 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-11-23 00:48:11.264366 | orchestrator | Sunday 23 November 2025 00:47:35 +0000 (0:00:00.560) 0:01:56.445 ******* 2025-11-23 00:48:11.264375 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.264385 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.264399 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.264409 | orchestrator | 2025-11-23 00:48:11.264419 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-11-23 00:48:11.264428 | orchestrator | Sunday 23 November 2025 00:47:36 +0000 (0:00:01.013) 0:01:57.459 ******* 2025-11-23 00:48:11.264437 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.264447 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.264456 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.264466 | orchestrator | 2025-11-23 00:48:11.264475 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-11-23 
00:48:11.264485 | orchestrator | Sunday 23 November 2025 00:47:37 +0000 (0:00:00.664) 0:01:58.123 ******* 2025-11-23 00:48:11.264494 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.264555 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.264564 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.264574 | orchestrator | 2025-11-23 00:48:11.264583 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-11-23 00:48:11.264593 | orchestrator | Sunday 23 November 2025 00:47:37 +0000 (0:00:00.248) 0:01:58.371 ******* 2025-11-23 00:48:11.264603 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264613 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264623 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264644 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264662 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264672 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264686 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264704 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264714 | orchestrator | 2025-11-23 00:48:11.264724 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-11-23 00:48:11.264734 | orchestrator | Sunday 23 November 2025 00:47:38 +0000 (0:00:01.364) 0:01:59.736 ******* 2025-11-23 00:48:11.264743 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264753 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264763 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264809 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264843 | orchestrator | 2025-11-23 00:48:11.264853 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-11-23 00:48:11.264863 | orchestrator | Sunday 23 November 2025 00:47:42 +0000 (0:00:03.731) 0:02:03.467 ******* 2025-11-23 00:48:11.264878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264889 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264898 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264908 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264940 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264970 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:48:11.264980 | orchestrator | 2025-11-23 00:48:11.264989 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-23 00:48:11.264999 | orchestrator | Sunday 23 November 2025 00:47:45 +0000 (0:00:02.885) 0:02:06.352 ******* 2025-11-23 00:48:11.265009 | orchestrator | 2025-11-23 00:48:11.265023 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-23 00:48:11.265032 | orchestrator | Sunday 23 November 2025 00:47:45 +0000 (0:00:00.068) 0:02:06.421 ******* 2025-11-23 00:48:11.265042 | orchestrator | 2025-11-23 00:48:11.265051 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-23 00:48:11.265061 | orchestrator | Sunday 23 November 2025 00:47:45 +0000 (0:00:00.061) 0:02:06.482 ******* 2025-11-23 00:48:11.265070 | orchestrator | 2025-11-23 00:48:11.265080 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-11-23 00:48:11.265089 | orchestrator | Sunday 23 November 2025 00:47:45 +0000 (0:00:00.061) 0:02:06.544 ******* 2025-11-23 00:48:11.265099 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:48:11.265109 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:48:11.265119 | orchestrator | 2025-11-23 00:48:11.265133 | orchestrator | 
RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-11-23 00:48:11.265143 | orchestrator | Sunday 23 November 2025 00:47:51 +0000 (0:00:06.399) 0:02:12.943 ******* 2025-11-23 00:48:11.265152 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:48:11.265162 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:48:11.265172 | orchestrator | 2025-11-23 00:48:11.265181 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-11-23 00:48:11.265191 | orchestrator | Sunday 23 November 2025 00:47:58 +0000 (0:00:06.484) 0:02:19.428 ******* 2025-11-23 00:48:11.265200 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:48:11.265210 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:48:11.265219 | orchestrator | 2025-11-23 00:48:11.265229 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-11-23 00:48:11.265238 | orchestrator | Sunday 23 November 2025 00:48:04 +0000 (0:00:06.408) 0:02:25.837 ******* 2025-11-23 00:48:11.265248 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:48:11.265257 | orchestrator | 2025-11-23 00:48:11.265267 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-11-23 00:48:11.265283 | orchestrator | Sunday 23 November 2025 00:48:04 +0000 (0:00:00.109) 0:02:25.946 ******* 2025-11-23 00:48:11.265292 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.265302 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.265311 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.265321 | orchestrator | 2025-11-23 00:48:11.265330 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-11-23 00:48:11.265340 | orchestrator | Sunday 23 November 2025 00:48:05 +0000 (0:00:00.716) 0:02:26.662 ******* 2025-11-23 00:48:11.265349 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.265358 | 
orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.265368 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:48:11.265377 | orchestrator | 2025-11-23 00:48:11.265387 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-11-23 00:48:11.265396 | orchestrator | Sunday 23 November 2025 00:48:06 +0000 (0:00:00.613) 0:02:27.275 ******* 2025-11-23 00:48:11.265406 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.265415 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.265425 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.265434 | orchestrator | 2025-11-23 00:48:11.265444 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-11-23 00:48:11.265453 | orchestrator | Sunday 23 November 2025 00:48:06 +0000 (0:00:00.709) 0:02:27.985 ******* 2025-11-23 00:48:11.265463 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:48:11.265472 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:48:11.265482 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:48:11.265491 | orchestrator | 2025-11-23 00:48:11.265518 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-11-23 00:48:11.265528 | orchestrator | Sunday 23 November 2025 00:48:07 +0000 (0:00:00.541) 0:02:28.527 ******* 2025-11-23 00:48:11.265537 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.265547 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:48:11.265556 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.265566 | orchestrator | 2025-11-23 00:48:11.265576 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-11-23 00:48:11.265585 | orchestrator | Sunday 23 November 2025 00:48:08 +0000 (0:00:00.662) 0:02:29.189 ******* 2025-11-23 00:48:11.265594 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:48:11.265604 | orchestrator | ok: [testbed-node-1] 2025-11-23 
00:48:11.265613 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:48:11.265623 | orchestrator | 2025-11-23 00:48:11.265632 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:48:11.265642 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-11-23 00:48:11.265652 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-23 00:48:11.265661 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-23 00:48:11.265671 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:48:11.265681 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:48:11.265690 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:48:11.265700 | orchestrator | 2025-11-23 00:48:11.265709 | orchestrator | 2025-11-23 00:48:11.265719 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:48:11.265728 | orchestrator | Sunday 23 November 2025 00:48:08 +0000 (0:00:00.773) 0:02:29.962 ******* 2025-11-23 00:48:11.265738 | orchestrator | =============================================================================== 2025-11-23 00:48:11.265757 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 37.71s 2025-11-23 00:48:11.265766 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.92s 2025-11-23 00:48:11.265776 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.90s 2025-11-23 00:48:11.265785 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.78s 2025-11-23 
00:48:11.265795 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.99s 2025-11-23 00:48:11.265805 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.90s 2025-11-23 00:48:11.265814 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.73s 2025-11-23 00:48:11.265829 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.89s 2025-11-23 00:48:11.265839 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.74s 2025-11-23 00:48:11.265848 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.07s 2025-11-23 00:48:11.265858 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.58s 2025-11-23 00:48:11.265867 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.56s 2025-11-23 00:48:11.265877 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.52s 2025-11-23 00:48:11.265887 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2025-11-23 00:48:11.265896 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s 2025-11-23 00:48:11.265910 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.24s 2025-11-23 00:48:11.265926 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.24s 2025-11-23 00:48:11.265949 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.16s 2025-11-23 00:48:11.265971 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.04s 2025-11-23 00:48:11.265985 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.01s 2025-11-23 00:48:11.265999 
| orchestrator | 2025-11-23 00:48:11 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state STARTED 2025-11-23 00:48:11.266014 | orchestrator | 2025-11-23 00:48:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:50:12.912477 | orchestrator | 2025-11-23 00:50:12 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED 2025-11-23 00:50:12.921624 | orchestrator | 2025-11-23 00:50:12 | INFO  | Task e9f1d938-ca16-4682-b0d2-c45465f852a1 is in state SUCCESS 2025-11-23 00:50:12.923075 | orchestrator | 2025-11-23 00:50:12.923124 | orchestrator | 2025-11-23 00:50:12.923138 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 00:50:12.923151 | orchestrator | 2025-11-23 00:50:12.923162 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:50:12.923174 | orchestrator | Sunday 23 November 2025 00:44:28 +0000
(0:00:00.513) 0:00:00.513 ******* 2025-11-23 00:50:12.923186 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.923198 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.923209 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.923220 | orchestrator | 2025-11-23 00:50:12.923231 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:50:12.923242 | orchestrator | Sunday 23 November 2025 00:44:28 +0000 (0:00:00.495) 0:00:01.009 ******* 2025-11-23 00:50:12.923253 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-11-23 00:50:12.923265 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-11-23 00:50:12.923276 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-11-23 00:50:12.923286 | orchestrator | 2025-11-23 00:50:12.923297 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-11-23 00:50:12.923708 | orchestrator | 2025-11-23 00:50:12.923729 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-11-23 00:50:12.923741 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.578) 0:00:01.587 ******* 2025-11-23 00:50:12.923752 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.923763 | orchestrator | 2025-11-23 00:50:12.923773 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-11-23 00:50:12.923784 | orchestrator | Sunday 23 November 2025 00:44:30 +0000 (0:00:00.942) 0:00:02.530 ******* 2025-11-23 00:50:12.923795 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.923821 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.923832 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.923843 | orchestrator | 2025-11-23 00:50:12.923854 | orchestrator | TASK 
[Setting sysctl values] *************************************************** 2025-11-23 00:50:12.923864 | orchestrator | Sunday 23 November 2025 00:44:31 +0000 (0:00:01.011) 0:00:03.541 ******* 2025-11-23 00:50:12.923876 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.923888 | orchestrator | 2025-11-23 00:50:12.923898 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-11-23 00:50:12.923909 | orchestrator | Sunday 23 November 2025 00:44:32 +0000 (0:00:01.210) 0:00:04.752 ******* 2025-11-23 00:50:12.923920 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.923930 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.923941 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.923951 | orchestrator | 2025-11-23 00:50:12.923962 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-11-23 00:50:12.923973 | orchestrator | Sunday 23 November 2025 00:44:33 +0000 (0:00:00.641) 0:00:05.393 ******* 2025-11-23 00:50:12.923983 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-23 00:50:12.923994 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-23 00:50:12.924004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-23 00:50:12.924015 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-23 00:50:12.924025 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-23 00:50:12.924036 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-23 00:50:12.924193 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-23 
00:50:12.924205 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-23 00:50:12.924215 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-23 00:50:12.924226 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-23 00:50:12.924237 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-23 00:50:12.924247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-23 00:50:12.924258 | orchestrator | 2025-11-23 00:50:12.924269 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-23 00:50:12.924279 | orchestrator | Sunday 23 November 2025 00:44:36 +0000 (0:00:02.803) 0:00:08.197 ******* 2025-11-23 00:50:12.924290 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-11-23 00:50:12.924302 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-11-23 00:50:12.924315 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-11-23 00:50:12.924327 | orchestrator | 2025-11-23 00:50:12.924339 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-23 00:50:12.924363 | orchestrator | Sunday 23 November 2025 00:44:37 +0000 (0:00:00.989) 0:00:09.187 ******* 2025-11-23 00:50:12.924376 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-11-23 00:50:12.924388 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-11-23 00:50:12.924401 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-11-23 00:50:12.924413 | orchestrator | 2025-11-23 00:50:12.924450 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-23 00:50:12.924464 | orchestrator | Sunday 23 November 2025 00:44:38 +0000 (0:00:01.425) 
0:00:10.613 ******* 2025-11-23 00:50:12.924477 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-11-23 00:50:12.924488 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.924513 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-11-23 00:50:12.924524 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.924534 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-11-23 00:50:12.924545 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.924555 | orchestrator | 2025-11-23 00:50:12.924566 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-11-23 00:50:12.924577 | orchestrator | Sunday 23 November 2025 00:44:39 +0000 (0:00:00.840) 0:00:11.453 ******* 2025-11-23 00:50:12.924591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.924616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.926302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.926324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.926337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.926413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.926455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.926470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.926496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.926509 | orchestrator | 2025-11-23 00:50:12.926522 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-11-23 00:50:12.926535 | orchestrator | Sunday 23 November 2025 00:44:41 +0000 (0:00:02.423) 0:00:13.877 ******* 2025-11-23 00:50:12.926546 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.926558 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.926569 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.926580 | orchestrator | 2025-11-23 00:50:12.926591 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-11-23 00:50:12.926602 | orchestrator | Sunday 23 November 2025 00:44:44 +0000 (0:00:02.203) 0:00:16.080 ******* 2025-11-23 00:50:12.926613 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-11-23 00:50:12.926624 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-11-23 00:50:12.926635 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-11-23 00:50:12.926645 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-11-23 00:50:12.926656 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-11-23 00:50:12.926667 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-11-23 00:50:12.926677 | orchestrator | 2025-11-23 00:50:12.926688 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-11-23 00:50:12.926708 | orchestrator | Sunday 23 November 2025 00:44:45 +0000 (0:00:01.929) 0:00:18.010 ******* 
2025-11-23 00:50:12.926719 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.926730 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.926740 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.926751 | orchestrator | 2025-11-23 00:50:12.926762 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-11-23 00:50:12.926772 | orchestrator | Sunday 23 November 2025 00:44:47 +0000 (0:00:01.463) 0:00:19.473 ******* 2025-11-23 00:50:12.926783 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.926794 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.926805 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.926815 | orchestrator | 2025-11-23 00:50:12.926826 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-11-23 00:50:12.926837 | orchestrator | Sunday 23 November 2025 00:44:49 +0000 (0:00:02.297) 0:00:21.771 ******* 2025-11-23 00:50:12.926849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.926874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.926886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.926905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-23 00:50:12.926917 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.926929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.926947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.926959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.926970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-23 00:50:12.926982 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.927001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.927013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.927029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.927047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-23 00:50:12.927058 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.927070 | orchestrator | 2025-11-23 00:50:12.927080 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-11-23 00:50:12.927092 | orchestrator | Sunday 23 November 2025 00:44:50 +0000 (0:00:00.525) 0:00:22.297 ******* 2025-11-23 00:50:12.927103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.927219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-23 00:50:12.927231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.927284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-23 00:50:12.927304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927315 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.927331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc', '__omit_place_holder__11131711432a26494db8ab029a74ead61dba39cc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-23 00:50:12.927350 | orchestrator | 2025-11-23 00:50:12.927361 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-11-23 00:50:12.927372 | orchestrator | Sunday 23 November 2025 00:44:54 +0000 (0:00:04.568) 0:00:26.866 ******* 2025-11-23 00:50:12.927383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.927558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.927571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.927582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.927593 | orchestrator | 2025-11-23 00:50:12.927604 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-11-23 00:50:12.927615 | orchestrator | Sunday 23 November 2025 00:44:58 +0000 (0:00:03.381) 0:00:30.247 ******* 2025-11-23 00:50:12.927627 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-23 00:50:12.927639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-23 00:50:12.927649 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-23 00:50:12.927660 | orchestrator | 2025-11-23 
00:50:12.927671 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-11-23 00:50:12.927682 | orchestrator | Sunday 23 November 2025 00:45:00 +0000 (0:00:02.511) 0:00:32.759 ******* 2025-11-23 00:50:12.927692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-23 00:50:12.927703 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-23 00:50:12.927714 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-23 00:50:12.927725 | orchestrator | 2025-11-23 00:50:12.927750 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-11-23 00:50:12.927761 | orchestrator | Sunday 23 November 2025 00:45:04 +0000 (0:00:04.126) 0:00:36.885 ******* 2025-11-23 00:50:12.927772 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.927783 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.927806 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.927817 | orchestrator | 2025-11-23 00:50:12.927827 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-11-23 00:50:12.927838 | orchestrator | Sunday 23 November 2025 00:45:05 +0000 (0:00:00.848) 0:00:37.734 ******* 2025-11-23 00:50:12.927849 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-23 00:50:12.927861 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-23 00:50:12.927871 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-23 00:50:12.927882 | orchestrator | 2025-11-23 
00:50:12.927892 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-11-23 00:50:12.927903 | orchestrator | Sunday 23 November 2025 00:45:08 +0000 (0:00:02.827) 0:00:40.561 ******* 2025-11-23 00:50:12.927914 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-23 00:50:12.927924 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-23 00:50:12.927947 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-23 00:50:12.927958 | orchestrator | 2025-11-23 00:50:12.927968 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-11-23 00:50:12.927979 | orchestrator | Sunday 23 November 2025 00:45:10 +0000 (0:00:01.874) 0:00:42.435 ******* 2025-11-23 00:50:12.927990 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-11-23 00:50:12.928001 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-11-23 00:50:12.928011 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-11-23 00:50:12.928022 | orchestrator | 2025-11-23 00:50:12.928033 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-11-23 00:50:12.928044 | orchestrator | Sunday 23 November 2025 00:45:11 +0000 (0:00:01.410) 0:00:43.846 ******* 2025-11-23 00:50:12.928054 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-11-23 00:50:12.928065 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-11-23 00:50:12.928075 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-11-23 00:50:12.928086 | orchestrator | 2025-11-23 00:50:12.928097 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2025-11-23 00:50:12.928107 | orchestrator | Sunday 23 November 2025 00:45:13 +0000 (0:00:01.394) 0:00:45.241 ******* 2025-11-23 00:50:12.928118 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.928128 | orchestrator | 2025-11-23 00:50:12.928139 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-11-23 00:50:12.928149 | orchestrator | Sunday 23 November 2025 00:45:13 +0000 (0:00:00.649) 0:00:45.891 ******* 2025-11-23 00:50:12.928161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.928172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.928197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-23 00:50:12.928209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.928225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.928237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-23 00:50:12.928248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.928259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.928276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-23 00:50:12.928287 | orchestrator | 2025-11-23 00:50:12.928298 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-11-23 00:50:12.928309 | orchestrator | Sunday 23 November 2025 00:45:17 +0000 (0:00:03.508) 0:00:49.399 ******* 2025-11-23 00:50:12.928328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928368 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.928379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928421 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.928465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928508 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.928519 | orchestrator | 2025-11-23 00:50:12.928530 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-11-23 00:50:12.928546 | orchestrator | Sunday 23 November 2025 00:45:17 +0000 (0:00:00.602) 0:00:50.002 ******* 2025-11-23 00:50:12.928557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-11-23 00:50:12.928591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928602 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.928613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928643 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928653 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.928669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928710 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.928721 | orchestrator | 2025-11-23 00:50:12.928731 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-11-23 00:50:12.928742 | orchestrator | Sunday 23 November 2025 00:45:18 +0000 (0:00:00.766) 0:00:50.768 ******* 2025-11-23 00:50:12.928753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928794 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.928805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928852 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.928863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928904 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.928915 | orchestrator | 2025-11-23 00:50:12.928925 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-11-23 00:50:12.928936 | orchestrator | Sunday 23 November 2025 00:45:19 +0000 (0:00:00.740) 0:00:51.509 ******* 2025-11-23 00:50:12.928947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.928963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.928975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.928992 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.929003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.929015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.929026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.929037 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.929055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.929067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.929083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.929104 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.929115 | orchestrator | 2025-11-23 00:50:12.929126 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-11-23 00:50:12.929137 | orchestrator | Sunday 23 November 2025 00:45:19 +0000 (0:00:00.514) 0:00:52.023 ******* 2025-11-23 00:50:12.929148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.929159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.929171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.929181 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.929201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.929213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.929229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.929248 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.929260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.929271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.929282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.929293 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.929304 | orchestrator | 2025-11-23 00:50:12.929315 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-11-23 00:50:12.929325 | orchestrator | Sunday 23 November 2025 00:45:20 +0000 (0:00:00.717) 0:00:52.741 ******* 2025-11-23 00:50:12.929336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2025-11-23 00:50:12.929354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.929365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.929383 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.929399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.930713 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.930996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.931015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.931026 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.931037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.931048 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.931058 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.931080 | orchestrator | 2025-11-23 00:50:12.931090 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-11-23 00:50:12.931100 | orchestrator | Sunday 23 November 2025 00:45:21 +0000 (0:00:00.742) 0:00:53.483 ******* 2025-11-23 00:50:12.931117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.931198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.931212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.931222 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.931232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.931243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.931297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.931316 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.931327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.931394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.932139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.932168 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.932176 | orchestrator | 2025-11-23 00:50:12.932185 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-11-23 00:50:12.932218 | orchestrator | Sunday 23 November 2025 00:45:22 +0000 (0:00:00.686) 0:00:54.170 ******* 2025-11-23 00:50:12.932228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.932237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.932245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.932264 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.932272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.932280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.932292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.932301 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.932372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-23 00:50:12.932384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-23 00:50:12.932392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-23 00:50:12.932401 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.932409 | orchestrator | 2025-11-23 00:50:12.932417 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-11-23 00:50:12.932451 | orchestrator | Sunday 23 November 2025 00:45:22 +0000 (0:00:00.729) 0:00:54.900 ******* 2025-11-23 00:50:12.932460 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-23 00:50:12.932468 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-23 00:50:12.932476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-23 00:50:12.932484 | orchestrator | 2025-11-23 00:50:12.932492 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-11-23 00:50:12.932500 | orchestrator | Sunday 23 November 2025 00:45:24 +0000 (0:00:01.937) 0:00:56.838 ******* 2025-11-23 00:50:12.932508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-23 00:50:12.932516 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-23 00:50:12.932524 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-23 00:50:12.932531 | orchestrator | 2025-11-23 00:50:12.932539 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-11-23 00:50:12.932547 | orchestrator | Sunday 23 November 2025 00:45:26 +0000 (0:00:01.521) 0:00:58.359 ******* 2025-11-23 00:50:12.932554 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-23 00:50:12.932562 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-23 00:50:12.932570 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-23 00:50:12.932604 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-23 00:50:12.932613 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.932621 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-23 00:50:12.932633 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.932641 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-23 00:50:12.932649 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.932657 | orchestrator |
2025-11-23 00:50:12.932665 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-11-23 00:50:12.932673 | orchestrator | Sunday 23 November 2025 00:45:27 +0000 (0:00:00.997) 0:00:59.356 *******
2025-11-23 00:50:12.932728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-11-23 00:50:12.932740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-23 00:50:12.932748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-23 00:50:12.933219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-23 00:50:12.933236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-23 00:50:12.933249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-23 00:50:12.933257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-23 00:50:12.933325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-23 00:50:12.933336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-23 00:50:12.933952 | orchestrator |
2025-11-23 00:50:12.933972 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-11-23 00:50:12.933981 | orchestrator | Sunday 23 November 2025 00:45:30 +0000 (0:00:02.676) 0:01:02.033 *******
2025-11-23 00:50:12.933995 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:50:12.934003 | orchestrator |
2025-11-23 00:50:12.934011 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-11-23 00:50:12.934059 | orchestrator | Sunday 23 November 2025 00:45:30 +0000 (0:00:00.534) 0:01:02.567 *******
2025-11-23 00:50:12.934069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-23 00:50:12.934080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-23 00:50:12.934089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-23 00:50:12.934150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-23 00:50:12.934160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-23 00:50:12.934178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-23 00:50:12.934207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934229 | orchestrator |
2025-11-23 00:50:12.934237 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-11-23 00:50:12.934245 | orchestrator | Sunday 23 November 2025 00:45:33 +0000 (0:00:03.158) 0:01:05.725 *******
2025-11-23 00:50:12.934253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-23 00:50:12.934262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-23 00:50:12.934270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934291 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.934304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-23 00:50:12.934318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-23 00:50:12.934326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934342 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.934351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-23 00:50:12.934363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-23 00:50:12.934376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934398 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.934406 | orchestrator |
2025-11-23 00:50:12.934414 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-11-23 00:50:12.934422 | orchestrator | Sunday 23 November 2025 00:45:34 +0000 (0:00:00.875) 0:01:06.601 *******
2025-11-23 00:50:12.934486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-11-23 00:50:12.934496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-11-23 00:50:12.934504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-11-23 00:50:12.934514 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.934522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-11-23 00:50:12.934530 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.934538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-11-23 00:50:12.934546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-11-23 00:50:12.934554 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.934561 | orchestrator |
2025-11-23 00:50:12.934569 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-11-23 00:50:12.934577 | orchestrator | Sunday 23 November 2025 00:45:35 +0000 (0:00:00.827) 0:01:07.428 *******
2025-11-23 00:50:12.934587 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.934596 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.934605 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.934613 | orchestrator |
2025-11-23 00:50:12.934622 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-11-23 00:50:12.934631 | orchestrator | Sunday 23 November 2025 00:45:36 +0000 (0:00:01.202) 0:01:08.630 *******
2025-11-23 00:50:12.934640 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.934649 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.934658 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.934666 | orchestrator |
2025-11-23 00:50:12.934675 | orchestrator | TASK [include_role : barbican] *************************************************
2025-11-23 00:50:12.934684 | orchestrator | Sunday 23 November 2025 00:45:38 +0000 (0:00:01.829) 0:01:10.459 *******
2025-11-23 00:50:12.934693 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:50:12.934701 | orchestrator |
2025-11-23 00:50:12.934710 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-11-23 00:50:12.934724 | orchestrator | Sunday 23 November 2025 00:45:39 +0000 (0:00:00.724) 0:01:11.184 *******
2025-11-23 00:50:12.934743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-11-23 00:50:12.934754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-11-23 00:50:12.934783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-11-23 00:50:12.934816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934840 | orchestrator |
2025-11-23 00:50:12.934848 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-11-23 00:50:12.934856 | orchestrator | Sunday 23 November 2025 00:45:42 +0000 (0:00:03.658) 0:01:14.843 *******
2025-11-23 00:50:12.934864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-11-23 00:50:12.934872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934896 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.934908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-11-23 00:50:12.934917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934933 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.934941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-11-23 00:50:12.934955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.934973 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.934980 | orchestrator |
2025-11-23 00:50:12.934986 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-11-23 00:50:12.934993 | orchestrator | Sunday 23 November 2025 00:45:43 +0000 (0:00:00.686) 0:01:15.530 *******
2025-11-23 00:50:12.935000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api',
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-23 00:50:12.935007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-23 00:50:12.935014 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.935021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-23 00:50:12.935028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-23 00:50:12.935035 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.935041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-23 00:50:12.935048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-23 00:50:12.935055 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.935061 | orchestrator | 2025-11-23 00:50:12.935068 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-11-23 00:50:12.935074 | orchestrator | Sunday 23 November 2025 00:45:44 +0000 (0:00:00.969) 0:01:16.499 ******* 2025-11-23 00:50:12.935081 | orchestrator | changed: [testbed-node-0] 
2025-11-23 00:50:12.935092 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.935099 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.935105 | orchestrator | 2025-11-23 00:50:12.935112 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-11-23 00:50:12.935118 | orchestrator | Sunday 23 November 2025 00:45:45 +0000 (0:00:01.259) 0:01:17.758 ******* 2025-11-23 00:50:12.935125 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.935132 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.935138 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.935145 | orchestrator | 2025-11-23 00:50:12.935151 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-11-23 00:50:12.935158 | orchestrator | Sunday 23 November 2025 00:45:47 +0000 (0:00:02.178) 0:01:19.937 ******* 2025-11-23 00:50:12.935164 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.935171 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.935177 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.935184 | orchestrator | 2025-11-23 00:50:12.935191 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-11-23 00:50:12.935197 | orchestrator | Sunday 23 November 2025 00:45:48 +0000 (0:00:00.265) 0:01:20.203 ******* 2025-11-23 00:50:12.935204 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.935210 | orchestrator | 2025-11-23 00:50:12.935217 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-11-23 00:50:12.935223 | orchestrator | Sunday 23 November 2025 00:45:48 +0000 (0:00:00.747) 0:01:20.950 ******* 2025-11-23 00:50:12.935233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-23 00:50:12.935246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-23 00:50:12.935253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-23 00:50:12.935264 | orchestrator | 2025-11-23 00:50:12.935271 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-11-23 00:50:12.935278 | orchestrator | Sunday 23 November 2025 00:45:51 +0000 (0:00:02.560) 0:01:23.511 ******* 2025-11-23 00:50:12.935285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-23 00:50:12.935292 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.935298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-23 00:50:12.935305 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.935316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-23 00:50:12.935326 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.935333 | orchestrator | 2025-11-23 00:50:12.935340 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-11-23 00:50:12.935346 | orchestrator | Sunday 23 November 2025 00:45:54 +0000 (0:00:03.379) 0:01:26.890 ******* 2025-11-23 00:50:12.935354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-23 00:50:12.935362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-23 00:50:12.935373 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.935381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-23 00:50:12.935388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-23 00:50:12.935394 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.935401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-23 00:50:12.935408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-23 00:50:12.935415 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.935422 | orchestrator | 2025-11-23 00:50:12.935440 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-11-23 00:50:12.935447 | orchestrator | Sunday 23 November 2025 00:45:56 +0000 (0:00:02.091) 0:01:28.982 ******* 2025-11-23 00:50:12.935454 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.935461 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.935467 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.935474 | orchestrator | 2025-11-23 00:50:12.935481 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-11-23 00:50:12.935487 | orchestrator | Sunday 23 November 2025 00:45:57 +0000 (0:00:00.659) 0:01:29.641 ******* 2025-11-23 00:50:12.935494 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.935501 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.935508 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.935514 | orchestrator | 2025-11-23 00:50:12.935521 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-11-23 00:50:12.935531 | 
orchestrator | Sunday 23 November 2025 00:45:58 +0000 (0:00:01.019) 0:01:30.661 ******* 2025-11-23 00:50:12.935538 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.935544 | orchestrator | 2025-11-23 00:50:12.935551 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-11-23 00:50:12.935558 | orchestrator | Sunday 23 November 2025 00:45:59 +0000 (0:00:00.664) 0:01:31.326 ******* 2025-11-23 00:50:12.935569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.935581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.935618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.935629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935685 | orchestrator | 2025-11-23 00:50:12.935692 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-11-23 00:50:12.935698 | orchestrator | Sunday 23 November 2025 00:46:03 +0000 (0:00:03.882) 0:01:35.208 ******* 2025-11-23 00:50:12.935705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.935712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935741 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.935753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.935761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-11-23 00:50:12.935775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935782 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.935792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.935807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.935822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})
2025-11-23 00:50:12.935828 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.935835 | orchestrator |
2025-11-23 00:50:12.935842 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-11-23 00:50:12.935848 | orchestrator | Sunday 23 November 2025 00:46:04 +0000 (0:00:00.932) 0:01:36.141 *******
2025-11-23 00:50:12.935855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-23 00:50:12.935862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-23 00:50:12.935869 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.935876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-23 00:50:12.935882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-23 00:50:12.935889 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.935896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-23 00:50:12.935907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-23 00:50:12.935914 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.935920 | orchestrator |
2025-11-23 00:50:12.935927 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-11-23 00:50:12.935937 | orchestrator | Sunday 23 November 2025 00:46:04 +0000 (0:00:00.836) 0:01:36.978 *******
2025-11-23 00:50:12.935944 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.935950 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.935957 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.935964 | orchestrator |
2025-11-23 00:50:12.935970 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-11-23 00:50:12.935977 | orchestrator | Sunday 23 November 2025 00:46:06 +0000 (0:00:01.359) 0:01:38.337 *******
2025-11-23 00:50:12.935983 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.935990 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.935996 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.936003 | orchestrator |
2025-11-23 00:50:12.936009 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-11-23 00:50:12.936016 | orchestrator | Sunday 23 November 2025 00:46:08 +0000 (0:00:01.951) 0:01:40.289 *******
2025-11-23 00:50:12.936026 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.936033 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.936040 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.936046 | orchestrator |
2025-11-23 00:50:12.936053 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-11-23 00:50:12.936060 | orchestrator | Sunday 23 November 2025 00:46:08 +0000 (0:00:00.413) 0:01:40.702 *******
2025-11-23 00:50:12.936066 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.936073 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.936079 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.936086 | orchestrator |
2025-11-23 00:50:12.936093 | orchestrator | TASK [include_role : designate] ************************************************
2025-11-23 00:50:12.936099 | orchestrator | Sunday 23 November 2025 00:46:08 +0000 (0:00:00.295) 0:01:40.997 *******
2025-11-23 00:50:12.936106 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:50:12.936112 | orchestrator |
2025-11-23 00:50:12.936119 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-11-23 00:50:12.936125 | orchestrator | Sunday 23 November 2025 00:46:09 +0000 (0:00:00.694) 0:01:41.692 *******
2025-11-23 00:50:12.936132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-11-23 00:50:12.936139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 00:50:12.936151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.936158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.936169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.937784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.937813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.937820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 00:50:12.937837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 00:50:12.937844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.937905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 00:50:12.937914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.937921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2025-11-23 00:50:12.937927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.937938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.937945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.937957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938062 | orchestrator | 2025-11-23 00:50:12.938068 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-11-23 00:50:12.938075 | orchestrator | Sunday 23 November 2025 00:46:13 +0000 (0:00:03.394) 0:01:45.086 ******* 2025-11-23 00:50:12.938081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 00:50:12.938092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 00:50:12.938141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 00:50:12.938180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 00:50:12.938194 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.938241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938296 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.938306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 00:50:12.938357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 00:50:12.938366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.938403 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.938472 | orchestrator | 2025-11-23 00:50:12.938479 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-11-23 00:50:12.938485 | orchestrator | Sunday 23 November 2025 00:46:13 +0000 (0:00:00.756) 
0:01:45.842 ******* 2025-11-23 00:50:12.938497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-23 00:50:12.938504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-23 00:50:12.938511 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.938517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-23 00:50:12.938569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-23 00:50:12.938579 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.938585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-23 00:50:12.938597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-23 00:50:12.938603 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.938609 | orchestrator | 2025-11-23 00:50:12.938616 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-11-23 00:50:12.938622 | orchestrator | Sunday 23 November 2025 00:46:14 +0000 (0:00:00.890) 0:01:46.732 ******* 2025-11-23 00:50:12.938628 | orchestrator | changed: [testbed-node-1] 
2025-11-23 00:50:12.938634 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.938640 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.938655 | orchestrator | 2025-11-23 00:50:12.938662 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-11-23 00:50:12.938668 | orchestrator | Sunday 23 November 2025 00:46:16 +0000 (0:00:01.583) 0:01:48.316 ******* 2025-11-23 00:50:12.938674 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.938680 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.938686 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.938692 | orchestrator | 2025-11-23 00:50:12.938737 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-11-23 00:50:12.938744 | orchestrator | Sunday 23 November 2025 00:46:18 +0000 (0:00:01.860) 0:01:50.176 ******* 2025-11-23 00:50:12.938750 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.938757 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.938763 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.938769 | orchestrator | 2025-11-23 00:50:12.938775 | orchestrator | TASK [include_role : glance] *************************************************** 2025-11-23 00:50:12.938781 | orchestrator | Sunday 23 November 2025 00:46:18 +0000 (0:00:00.465) 0:01:50.641 ******* 2025-11-23 00:50:12.938787 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.938793 | orchestrator | 2025-11-23 00:50:12.938799 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-11-23 00:50:12.938805 | orchestrator | Sunday 23 November 2025 00:46:19 +0000 (0:00:01.047) 0:01:51.688 ******* 2025-11-23 00:50:12.938817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:50:12.938886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.938897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:50:12.938948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.938964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:50:12.939014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.939032 | orchestrator | 2025-11-23 00:50:12.939039 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-11-23 00:50:12.939045 | orchestrator | Sunday 23 November 2025 00:46:24 +0000 (0:00:04.502) 0:01:56.190 ******* 2025-11-23 00:50:12.939052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:50:12.939104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.939119 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.939125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:50:12.939136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.939147 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.939195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:50:12.939205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-11-23 00:50:12.939217 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.939223 | orchestrator |
2025-11-23 00:50:12.939229 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-11-23 00:50:12.939236 | orchestrator | Sunday 23 November 2025 00:46:27 +0000 (0:00:03.055) 0:01:59.246 *******
2025-11-23 00:50:12.939246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-11-23 00:50:12.939296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-11-23 00:50:12.939305 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.939312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-11-23 00:50:12.939319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-11-23 00:50:12.939325 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.939331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-11-23 00:50:12.939338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-11-23 00:50:12.939344 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.939351 | orchestrator |
2025-11-23 00:50:12.939357 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-11-23 00:50:12.939368 | orchestrator | Sunday 23 November 2025 00:46:29 +0000 (0:00:02.766) 0:02:02.014 *******
2025-11-23 00:50:12.939374 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.939380 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.939386 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.939393 | orchestrator |
2025-11-23 00:50:12.939399 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-11-23 00:50:12.939405 | orchestrator | Sunday 23 November 2025 00:46:31 +0000 (0:00:01.298) 0:02:03.312 *******
2025-11-23 00:50:12.939411 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.939417 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.939441 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.939447 | orchestrator |
2025-11-23 00:50:12.939453 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-11-23 00:50:12.939459 | orchestrator | Sunday 23 November 2025 00:46:33 +0000 (0:00:01.903) 0:02:05.216 *******
2025-11-23 00:50:12.939466 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.939472 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.939478 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.939484 | orchestrator |
2025-11-23 00:50:12.939490 | orchestrator | TASK [include_role : grafana] **************************************************
2025-11-23 00:50:12.939500 | orchestrator | Sunday 23 November 2025 00:46:33 +0000 (0:00:00.389) 0:02:05.605 *******
2025-11-23 00:50:12.939506 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:50:12.939512 | orchestrator |
2025-11-23 00:50:12.939518 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-11-23 00:50:12.939524 | orchestrator | Sunday 23 November 2025 00:46:34 +0000 (0:00:00.774) 0:02:06.380 *******
2025-11-23 00:50:12.939574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-23 00:50:12.939584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-23 00:50:12.939591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-23 00:50:12.939597 | orchestrator |
2025-11-23 00:50:12.939603 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-11-23 00:50:12.939615 | orchestrator | Sunday 23 November 2025 00:46:37 +0000 (0:00:03.000) 0:02:09.381 *******
2025-11-23 00:50:12.939622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-23 00:50:12.939628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-23 00:50:12.939635 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.939641 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.939651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-23 00:50:12.939658 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.939664 | orchestrator |
2025-11-23 00:50:12.939710 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-11-23 00:50:12.939719 | orchestrator | Sunday 23 November 2025 00:46:37 +0000 (0:00:00.498) 0:02:09.880 *******
2025-11-23 00:50:12.939725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-11-23 00:50:12.939732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-11-23 00:50:12.939738 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.939744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-11-23 00:50:12.939751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-11-23 00:50:12.939757 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.939763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-11-23 00:50:12.939769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-11-23 00:50:12.939781 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.939787 | orchestrator |
2025-11-23 00:50:12.939793 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-11-23 00:50:12.939799 | orchestrator | Sunday 23 November 2025 00:46:38 +0000 (0:00:00.587) 0:02:10.468 *******
2025-11-23 00:50:12.939805 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.939812 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.939818 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.939824 | orchestrator |
2025-11-23 00:50:12.939830 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-11-23 00:50:12.939836 | orchestrator | Sunday 23 November 2025 00:46:39 +0000 (0:00:01.272) 0:02:11.741 *******
2025-11-23 00:50:12.939843 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.939849 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.939855 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.939861 | orchestrator |
2025-11-23 00:50:12.939867 | orchestrator | TASK [include_role : heat] *****************************************************
2025-11-23 00:50:12.939874 | orchestrator | Sunday 23 November 2025 00:46:41 +0000 (0:00:01.855) 0:02:13.596 *******
2025-11-23 00:50:12.939880 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.939886 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.939892 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.939898 | orchestrator |
2025-11-23 00:50:12.939904 | orchestrator | TASK [include_role : horizon] **************************************************
2025-11-23 00:50:12.939910 | orchestrator | Sunday 23 November 2025 00:46:41 +0000 (0:00:00.408) 0:02:14.005 *******
2025-11-23 00:50:12.939917 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:50:12.939923 | orchestrator |
2025-11-23 00:50:12.939929 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-11-23 00:50:12.939935 | orchestrator | Sunday 23 November 2025 00:46:42 +0000 (0:00:00.825) 0:02:14.831 *******
2025-11-23 00:50:12.939987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-23 00:50:12.940004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-23 00:50:12.940092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-23 00:50:12.940112 | orchestrator |
2025-11-23 00:50:12.940118 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-11-23 00:50:12.940125 | orchestrator | Sunday 23 November 2025 00:46:45 +0000 (0:00:03.049) 0:02:17.880 *******
2025-11-23 00:50:12.940131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-23 00:50:12.940138 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.940190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-23 00:50:12.940205 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.940215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-23 00:50:12.940223 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.940229 | orchestrator |
2025-11-23 00:50:12.940235 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-11-23 00:50:12.940241 | orchestrator | Sunday 23 November 2025 00:46:46 +0000 (0:00:00.935) 0:02:18.816 *******
2025-11-23 00:50:12.940286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-11-23 00:50:12.940296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-11-23 00:50:12.940308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-11-23 00:50:12.940315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-11-23 00:50:12.940323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-11-23 00:50:12.940330 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.940336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-11-23 00:50:12.940343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-11-23 00:50:12.940349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-11-23 00:50:12.940356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-11-23 00:50:12.940362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-11-23 00:50:12.940368 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.940375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-11-23 00:50:12.940381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-11-23 00:50:12.940391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-11-23 00:50:12.940448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-11-23 00:50:12.940464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-11-23 00:50:12.940470 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.940476 | orchestrator |
2025-11-23 00:50:12.940483 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-11-23 00:50:12.940489 | orchestrator | Sunday 23 November 2025 00:46:47 +0000 (0:00:00.859) 0:02:19.676 *******
2025-11-23 00:50:12.940495 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.940502 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.940508 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.940514 | orchestrator |
2025-11-23 00:50:12.940520 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-11-23 00:50:12.940526 | orchestrator | Sunday 23 November 2025 00:46:48 +0000 (0:00:01.209) 0:02:20.885 *******
2025-11-23 00:50:12.940533 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:50:12.940539 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:50:12.940545 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:50:12.940551 | orchestrator |
2025-11-23 00:50:12.940557 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-11-23 00:50:12.940564 | orchestrator | Sunday 23 November 2025 00:46:50 +0000 (0:00:01.808) 0:02:22.694 *******
2025-11-23 00:50:12.940570 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.940576 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.940582 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.940588 | orchestrator |
2025-11-23 00:50:12.940594 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-11-23 00:50:12.940601 | orchestrator | Sunday 23 November 2025 00:46:50 +0000 (0:00:00.263) 0:02:22.958 *******
2025-11-23 00:50:12.940607 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:50:12.940613 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:50:12.940619 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:50:12.940625 | orchestrator |
2025-11-23 00:50:12.940631 | orchestrator | TASK [include_role : keystone] *************************************************
2025-11-23 00:50:12.940637 | orchestrator | Sunday 23 November 2025 00:46:51 +0000 (0:00:00.403) 0:02:23.361 *******
2025-11-23 00:50:12.940644 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:50:12.940650 | orchestrator |
2025-11-23 00:50:12.940656 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-11-23 00:50:12.940662 | orchestrator | Sunday 23 November 2025 00:46:52 +0000 (0:00:00.860) 0:02:24.222 *******
2025-11-23 00:50:12.940669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:50:12.940677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:50:12.940691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:50:12.940741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:50:12.940751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:50:12.940758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:50:12.940765 | orchestrator | changed: [testbed-node-2]
=> (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-23 00:50:12.940780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-23 00:50:12.940826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-23 00:50:12.940835 | orchestrator | 2025-11-23 00:50:12.940841 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-11-23 00:50:12.940847 | orchestrator | Sunday 23 November 2025 00:46:55 +0000 (0:00:03.552) 0:02:27.775 ******* 2025-11-23 00:50:12.940854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-23 00:50:12.940861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-23 00:50:12.940867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-23 00:50:12.940879 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.940889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-23 00:50:12.940935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-23 00:50:12.940944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-23 00:50:12.940950 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.940957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-23 00:50:12.940964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-23 00:50:12.940976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-23 00:50:12.940983 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.940989 | orchestrator | 2025-11-23 00:50:12.940996 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2025-11-23 00:50:12.941002 | orchestrator | Sunday 23 November 2025 00:46:56 +0000 (0:00:01.073) 0:02:28.849 ******* 2025-11-23 00:50:12.941012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-23 00:50:12.941019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-23 00:50:12.941026 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.941070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-23 00:50:12.941080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-23 00:50:12.941086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-23 00:50:12.941093 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.941099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-23 00:50:12.941105 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.941112 | orchestrator | 2025-11-23 00:50:12.941118 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-11-23 00:50:12.941124 | orchestrator | Sunday 23 November 2025 00:46:57 +0000 (0:00:00.853) 0:02:29.702 ******* 2025-11-23 00:50:12.941130 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.941136 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.941142 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.941148 | orchestrator | 2025-11-23 00:50:12.941154 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-11-23 00:50:12.941160 | orchestrator | Sunday 23 November 2025 00:46:58 +0000 (0:00:01.233) 0:02:30.936 ******* 2025-11-23 00:50:12.941166 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.941173 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.941186 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.941193 | orchestrator | 2025-11-23 00:50:12.941199 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-11-23 00:50:12.941205 | orchestrator | Sunday 23 November 2025 00:47:00 +0000 (0:00:01.971) 0:02:32.907 ******* 2025-11-23 00:50:12.941211 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.941217 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.941223 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.941229 | orchestrator | 2025-11-23 00:50:12.941235 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-11-23 00:50:12.941242 | orchestrator | Sunday 23 November 2025 00:47:01 +0000 (0:00:00.424) 0:02:33.331 ******* 2025-11-23 00:50:12.941248 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.941254 | orchestrator | 2025-11-23 00:50:12.941260 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-11-23 00:50:12.941266 | orchestrator | Sunday 23 November 2025 00:47:02 +0000 (0:00:00.930) 0:02:34.261 ******* 2025-11-23 00:50:12.941272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 00:50:12.941283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 00:50:12.941341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 00:50:12.941360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941366 | orchestrator | 2025-11-23 00:50:12.941373 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-11-23 00:50:12.941379 | orchestrator | Sunday 23 November 2025 00:47:05 +0000 (0:00:03.606) 0:02:37.868 ******* 2025-11-23 00:50:12.941389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 00:50:12.941475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941486 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.941499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 00:50:12.941506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941512 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.941519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 00:50:12.941529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941536 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.941542 | orchestrator | 2025-11-23 00:50:12.941590 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-11-23 00:50:12.941599 | orchestrator | Sunday 23 November 2025 00:47:06 +0000 (0:00:00.857) 0:02:38.725 ******* 2025-11-23 00:50:12.941606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-23 00:50:12.941613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-23 00:50:12.941624 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.941631 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-23 00:50:12.941637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-23 00:50:12.941643 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.941650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-23 00:50:12.941656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-23 00:50:12.941662 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.941668 | orchestrator | 2025-11-23 00:50:12.941675 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-11-23 00:50:12.941681 | orchestrator | Sunday 23 November 2025 00:47:07 +0000 (0:00:00.935) 0:02:39.660 ******* 2025-11-23 00:50:12.941687 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.941693 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.941699 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.941705 | orchestrator | 2025-11-23 00:50:12.941712 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-11-23 00:50:12.941718 | orchestrator | Sunday 23 November 2025 00:47:08 +0000 (0:00:01.282) 0:02:40.943 ******* 2025-11-23 00:50:12.941724 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.941730 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.941736 | orchestrator | changed: 
[testbed-node-2] 2025-11-23 00:50:12.941741 | orchestrator | 2025-11-23 00:50:12.941747 | orchestrator | TASK [include_role : manila] *************************************************** 2025-11-23 00:50:12.941752 | orchestrator | Sunday 23 November 2025 00:47:10 +0000 (0:00:01.950) 0:02:42.894 ******* 2025-11-23 00:50:12.941757 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.941763 | orchestrator | 2025-11-23 00:50:12.941768 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-11-23 00:50:12.941773 | orchestrator | Sunday 23 November 2025 00:47:11 +0000 (0:00:01.094) 0:02:43.989 ******* 2025-11-23 00:50:12.941779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-23 00:50:12.941786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-23 00:50:12.941868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-23 00:50:12.941944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.941961 | orchestrator | 2025-11-23 00:50:12.941966 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-11-23 00:50:12.941972 | orchestrator | Sunday 23 November 2025 00:47:15 +0000 (0:00:03.142) 0:02:47.131 ******* 2025-11-23 00:50:12.941977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-23 00:50:12.941991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942072 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.942078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-23 00:50:12.942084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942089 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942109 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.942152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-23 00:50:12.942160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.942177 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.942182 | orchestrator | 2025-11-23 00:50:12.942188 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-11-23 00:50:12.942193 | orchestrator | Sunday 23 November 2025 00:47:15 +0000 (0:00:00.645) 0:02:47.777 ******* 2025-11-23 00:50:12.942199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-23 00:50:12.942211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-23 00:50:12.942216 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.942221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-23 00:50:12.942227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-23 00:50:12.942232 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.942241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-23 00:50:12.942246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-23 00:50:12.942252 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.942257 | orchestrator | 2025-11-23 00:50:12.942263 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-11-23 00:50:12.942268 | orchestrator | Sunday 23 November 2025 00:47:16 +0000 (0:00:00.935) 0:02:48.713 ******* 2025-11-23 00:50:12.942308 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.942316 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.942321 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.942326 | orchestrator | 2025-11-23 00:50:12.942332 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-11-23 00:50:12.942337 | orchestrator | Sunday 23 November 2025 00:47:17 +0000 (0:00:01.298) 0:02:50.012 ******* 2025-11-23 00:50:12.942343 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.942348 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.942353 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.942359 | orchestrator | 2025-11-23 00:50:12.942364 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-11-23 00:50:12.942370 | orchestrator | Sunday 23 November 2025 00:47:19 +0000 (0:00:01.840) 0:02:51.852 ******* 2025-11-23 00:50:12.942375 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.942380 | orchestrator | 2025-11-23 00:50:12.942386 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-11-23 00:50:12.942391 | orchestrator | Sunday 23 November 2025 00:47:20 +0000 (0:00:01.125) 0:02:52.978 ******* 2025-11-23 00:50:12.942396 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-23 00:50:12.942402 | orchestrator | 2025-11-23 00:50:12.942407 | orchestrator | TASK 
[haproxy-config : Copying over mariadb haproxy config] ******************** 2025-11-23 00:50:12.942412 | orchestrator | Sunday 23 November 2025 00:47:23 +0000 (0:00:02.487) 0:02:55.465 ******* 2025-11-23 00:50:12.942419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-23 00:50:12.942444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-23 00:50:12.942450 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.942500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-23 00:50:12.942509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-23 00:50:12.942520 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.942529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-23 00:50:12.942570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-23 00:50:12.942578 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.942584 | orchestrator | 2025-11-23 00:50:12.942589 | orchestrator | TASK 
[haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-11-23 00:50:12.942595 | orchestrator | Sunday 23 November 2025 00:47:25 +0000 (0:00:01.843) 0:02:57.308 ******* 2025-11-23 00:50:12.942600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-23 00:50:12.942611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-23 00:50:12.942617 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.942660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-23 00:50:12.942669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-23 00:50:12.942674 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.942680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-23 00:50:12.942694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-23 00:50:12.942700 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.942705 | orchestrator | 2025-11-23 00:50:12.942710 | orchestrator | TASK 
[haproxy-config : Configuring firewall for mariadb] *********************** 2025-11-23 00:50:12.942716 | orchestrator | Sunday 23 November 2025 00:47:27 +0000 (0:00:02.289) 0:02:59.597 ******* 2025-11-23 00:50:12.942755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-23 00:50:12.942764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-23 00:50:12.942769 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.942775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-23 00:50:12.942785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-23 00:50:12.942791 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.942796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-23 00:50:12.942802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-23 00:50:12.942807 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.942813 | orchestrator | 2025-11-23 00:50:12.942818 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-11-23 00:50:12.942824 | orchestrator | Sunday 23 November 2025 00:47:29 +0000 (0:00:02.285) 0:03:01.883 ******* 2025-11-23 00:50:12.942833 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.942839 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.942844 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.942849 | orchestrator | 2025-11-23 00:50:12.942855 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-11-23 00:50:12.942860 | orchestrator | Sunday 23 November 2025 00:47:31 +0000 (0:00:01.607) 0:03:03.490 ******* 2025-11-23 00:50:12.942865 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.942871 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.942876 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.942881 | orchestrator | 2025-11-23 00:50:12.942887 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-11-23 00:50:12.942892 | orchestrator | Sunday 23 November 2025 00:47:32 +0000 (0:00:01.258) 0:03:04.749 ******* 2025-11-23 00:50:12.942933 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.942941 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.942946 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.942952 | orchestrator | 2025-11-23 00:50:12.942957 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-11-23 00:50:12.942962 | orchestrator | Sunday 23 November 2025 00:47:33 +0000 (0:00:00.289) 0:03:05.039 ******* 2025-11-23 00:50:12.942968 | orchestrator | 
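The `custom_member_list` entries logged in the mariadb haproxy-config tasks above are rendered into an HAProxy `listen` section on the controllers. A minimal sketch of the resulting listener, assuming kolla-ansible's usual layout (the generated file and the bind VIP are assumptions; the VIP is not shown in this log):

```
listen mariadb
  mode tcp
  option clitcpka
  timeout client 3600s
  option srvtcpka
  timeout server 3600s
  # bind address is the internal VIP (assumed; not visible in this log)
  server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
  server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
  server testbed-node-2 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
```

Only testbed-node-0 receives traffic; the two `backup` members are promoted when its port-3306 health check fails (`fall 5`), which keeps writes on a single Galera node at a time.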
included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.942977 | orchestrator | 2025-11-23 00:50:12.942983 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-11-23 00:50:12.942988 | orchestrator | Sunday 23 November 2025 00:47:34 +0000 (0:00:01.159) 0:03:06.199 ******* 2025-11-23 00:50:12.942994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-23 00:50:12.943000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-23 00:50:12.943006 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-23 00:50:12.943012 | orchestrator | 2025-11-23 00:50:12.943017 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-11-23 00:50:12.943022 | orchestrator | Sunday 23 November 2025 00:47:35 +0000 (0:00:01.333) 0:03:07.532 ******* 2025-11-23 00:50:12.943031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-23 00:50:12.943037 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.943077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-23 00:50:12.943090 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.943095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-23 00:50:12.943101 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.943107 | orchestrator | 2025-11-23 00:50:12.943112 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-11-23 00:50:12.943118 | orchestrator | Sunday 23 November 2025 00:47:35 +0000 (0:00:00.375) 0:03:07.908 ******* 2025-11-23 00:50:12.943123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-23 00:50:12.943129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-23 00:50:12.943135 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.943140 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.943146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-23 00:50:12.943151 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.943157 | orchestrator | 2025-11-23 00:50:12.943162 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-11-23 00:50:12.943168 | orchestrator | Sunday 23 November 2025 00:47:36 +0000 (0:00:00.734) 0:03:08.642 ******* 2025-11-23 00:50:12.943173 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.943178 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.943184 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.943189 | orchestrator | 2025-11-23 00:50:12.943194 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-11-23 00:50:12.943200 | orchestrator | Sunday 23 November 2025 00:47:37 +0000 (0:00:00.403) 0:03:09.046 ******* 2025-11-23 00:50:12.943205 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.943211 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.943216 | 
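The pattern in the loops above — `changed` for the mariadb listener but `skipping` for `mariadb_external_lb` and memcached — comes down to each service's per-listener `enabled` flag. A minimal sketch of that selection logic (this is an illustration, not kolla-ansible's actual template code; the dict values are abbreviated from the log):

```python
# Each service carries one or more candidate HAProxy listeners; only those
# with enabled=True are rendered into config, the rest are skipped.
services = {
    "mariadb": {"enabled": True, "mode": "tcp", "port": "3306"},
    "mariadb_external_lb": {"enabled": False, "mode": "tcp", "port": "3306"},
    "memcached": {"enabled": False, "mode": "tcp", "port": "11211"},
}

def rendered_listeners(services):
    # Keep only the listeners whose haproxy entry is enabled.
    return [name for name, cfg in services.items() if cfg["enabled"]]

print(rendered_listeners(services))  # only 'mariadb' gets a frontend
```

This is why memcached, despite having its haproxy config copied, produces no load-balancer frontend here: its `haproxy.memcached.enabled` is `False` in every item shown.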
orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.943221 | orchestrator | 2025-11-23 00:50:12.943227 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-11-23 00:50:12.943232 | orchestrator | Sunday 23 November 2025 00:47:38 +0000 (0:00:01.095) 0:03:10.141 ******* 2025-11-23 00:50:12.943237 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.943243 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.943248 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.943259 | orchestrator | 2025-11-23 00:50:12.943264 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-11-23 00:50:12.943269 | orchestrator | Sunday 23 November 2025 00:47:38 +0000 (0:00:00.297) 0:03:10.439 ******* 2025-11-23 00:50:12.943275 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.943280 | orchestrator | 2025-11-23 00:50:12.943285 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-11-23 00:50:12.943294 | orchestrator | Sunday 23 November 2025 00:47:39 +0000 (0:00:01.322) 0:03:11.761 ******* 2025-11-23 00:50:12.943335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 00:50:12.943343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-23 00:50:12.943373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-23 
00:50:12.943413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.943451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-23 
00:50:12.943528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.943533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 00:50:12.943546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 00:50:12.943594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-23 00:50:12.943661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943674 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-23 00:50:12.943690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-11-23 00:50:12.943696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943753 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.943765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.943841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.943913 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.943920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.943926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.943936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.943942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.943947 | orchestrator | 2025-11-23 00:50:12.943953 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-11-23 00:50:12.943958 | orchestrator | Sunday 23 November 2025 00:47:43 +0000 (0:00:03.890) 0:03:15.652 ******* 2025-11-23 00:50:12.944002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 00:50:12.944010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-23 00:50:12.944074 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 00:50:12.944082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-23 00:50:12.944177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.944183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-23 
00:50:12.944188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.944312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2025-11-23 00:50:12.944320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.944341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 00:50:12.944347 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.944353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.944436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.944451 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.944494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-23 00:50:12.944507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.944595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-23 00:50:12.944629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.944634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-23 00:50:12.944640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-23 00:50:12.944645 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.944651 | orchestrator | 2025-11-23 00:50:12.944656 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-11-23 00:50:12.944665 | orchestrator | Sunday 23 November 2025 00:47:44 +0000 (0:00:01.311) 0:03:16.964 ******* 2025-11-23 00:50:12.944671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-23 00:50:12.944677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-23 00:50:12.944687 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.944711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-23 00:50:12.944717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-23 00:50:12.944723 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.944728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-23 00:50:12.944733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-23 00:50:12.944739 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.944744 | orchestrator | 2025-11-23 00:50:12.944750 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-11-23 00:50:12.944755 | orchestrator | Sunday 23 November 2025 00:47:46 +0000 (0:00:01.790) 0:03:18.754 ******* 2025-11-23 00:50:12.944761 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.944766 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.944771 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.944777 | orchestrator | 2025-11-23 00:50:12.944782 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-11-23 00:50:12.944788 | orchestrator | Sunday 23 November 2025 00:47:48 +0000 (0:00:01.295) 0:03:20.050 ******* 2025-11-23 00:50:12.944793 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.944798 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.944804 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.944809 | orchestrator | 2025-11-23 00:50:12.944815 | orchestrator | TASK [include_role : placement] ************************************************ 2025-11-23 00:50:12.944820 | orchestrator | Sunday 23 November 2025 00:47:49 +0000 (0:00:01.829) 0:03:21.879 ******* 2025-11-23 00:50:12.944825 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.944831 | orchestrator | 2025-11-23 00:50:12.944836 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-11-23 00:50:12.944841 | 
orchestrator | Sunday 23 November 2025 00:47:50 +0000 (0:00:01.123) 0:03:23.003 ******* 2025-11-23 00:50:12.944847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.944856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.944882 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.944890 | orchestrator | 2025-11-23 00:50:12.944895 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-11-23 00:50:12.944901 | orchestrator | Sunday 23 November 2025 00:47:54 +0000 (0:00:03.413) 0:03:26.416 ******* 2025-11-23 00:50:12.944906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.944912 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.944917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.944923 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.944929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.944939 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.944944 | orchestrator | 2025-11-23 00:50:12.944950 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-11-23 00:50:12.944958 | orchestrator | Sunday 23 November 2025 00:47:54 +0000 (0:00:00.431) 0:03:26.848 ******* 2025-11-23 00:50:12.944964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-23 00:50:12.944970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-23 00:50:12.944976 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.944995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945007 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.945013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945024 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.945029 | orchestrator | 2025-11-23 00:50:12.945034 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-11-23 00:50:12.945040 | orchestrator | Sunday 23 November 2025 00:47:55 +0000 (0:00:00.685) 0:03:27.533 ******* 2025-11-23 00:50:12.945045 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.945051 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.945056 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.945062 | orchestrator | 2025-11-23 00:50:12.945067 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-11-23 00:50:12.945072 | orchestrator | Sunday 23 November 2025 00:47:57 +0000 (0:00:01.698) 0:03:29.232 ******* 2025-11-23 00:50:12.945078 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.945083 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.945089 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.945094 | orchestrator | 2025-11-23 00:50:12.945099 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-11-23 00:50:12.945105 | orchestrator | Sunday 23 November 2025 00:47:58 +0000 (0:00:01.676) 0:03:30.908 ******* 2025-11-23 00:50:12.945110 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.945115 | orchestrator | 2025-11-23 00:50:12.945121 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-11-23 00:50:12.945126 | orchestrator | Sunday 23 November 2025 00:48:00 +0000 (0:00:01.380) 0:03:32.288 ******* 2025-11-23 00:50:12.945133 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.945147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.945185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.945205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945240 | orchestrator | 2025-11-23 00:50:12.945246 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-11-23 00:50:12.945252 | orchestrator | Sunday 23 November 2025 00:48:04 +0000 (0:00:03.875) 0:03:36.164 ******* 2025-11-23 00:50:12.945259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.945270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945286 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.945307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.945314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945333 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.945340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.945350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.945378 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.945385 | orchestrator | 2025-11-23 00:50:12.945391 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-11-23 00:50:12.945397 | orchestrator | Sunday 23 November 2025 00:48:05 +0000 (0:00:00.966) 0:03:37.131 ******* 2025-11-23 00:50:12.945403 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945481 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.945487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945512 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.945518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-23 00:50:12.945540 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.945546 | orchestrator | 2025-11-23 00:50:12.945551 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-11-23 00:50:12.945557 | orchestrator | Sunday 23 November 2025 00:48:05 +0000 (0:00:00.882) 0:03:38.013 ******* 2025-11-23 00:50:12.945562 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.945567 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.945573 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.945578 | orchestrator | 2025-11-23 00:50:12.945587 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-11-23 00:50:12.945593 | orchestrator | Sunday 23 November 2025 00:48:07 +0000 (0:00:01.308) 0:03:39.322 ******* 2025-11-23 
00:50:12.945598 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.945604 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.945609 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.945615 | orchestrator | 2025-11-23 00:50:12.945620 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-11-23 00:50:12.945625 | orchestrator | Sunday 23 November 2025 00:48:09 +0000 (0:00:01.817) 0:03:41.140 ******* 2025-11-23 00:50:12.945631 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.945636 | orchestrator | 2025-11-23 00:50:12.945641 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-11-23 00:50:12.945664 | orchestrator | Sunday 23 November 2025 00:48:10 +0000 (0:00:01.473) 0:03:42.613 ******* 2025-11-23 00:50:12.945670 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-11-23 00:50:12.945680 | orchestrator | 2025-11-23 00:50:12.945686 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-11-23 00:50:12.945691 | orchestrator | Sunday 23 November 2025 00:48:11 +0000 (0:00:00.823) 0:03:43.436 ******* 2025-11-23 00:50:12.945697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-23 00:50:12.945703 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-23 00:50:12.945708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-23 00:50:12.945714 | orchestrator | 2025-11-23 00:50:12.945720 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-11-23 00:50:12.945725 | orchestrator | Sunday 23 November 2025 00:48:15 +0000 (0:00:03.824) 0:03:47.261 ******* 2025-11-23 00:50:12.945731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.945736 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.945742 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.945748 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.945756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.945762 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.945767 | orchestrator | 2025-11-23 00:50:12.945773 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-11-23 00:50:12.945781 | orchestrator | Sunday 23 November 2025 00:48:16 +0000 (0:00:00.946) 0:03:48.208 ******* 2025-11-23 00:50:12.945801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-23 00:50:12.945809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
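The haproxy-config task items recorded above all share the same service-dict shape (`enabled`, `mode`, `port`, `listen_port`, `backend_http_extra`, optional `external_fqdn`). The following sketch is illustrative only and is not kolla-ansible's actual template logic: it shows, under that assumption, how one such dict (here the `nova_novncproxy` entry from the log) could be rendered into an HAProxy `listen` stanza. The `render_listen_stanza` helper and the backend address list are hypothetical; the dict values are taken from the log items.

```python
def render_listen_stanza(name, svc, backends):
    """Render a minimal HAProxy 'listen' section for one service dict.

    Illustrative sketch only -- kolla-ansible renders these via its own
    Jinja2 templates; this just mirrors the dict shape seen in the log.
    """
    # The log shows services disabled either with False or the string 'no'.
    if svc.get("enabled") in (False, "no"):
        return ""
    lines = [f"listen {name}"]
    lines.append(f"    mode {svc.get('mode', 'http')}")
    lines.append(f"    bind *:{svc['listen_port']}")
    # Extra backend options, e.g. long tunnel timeouts for console proxies.
    for extra in svc.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{svc['port']} check")
    return "\n".join(lines)


# Values copied from the nova_novncproxy item logged above.
svc = {
    "enabled": True,
    "mode": "http",
    "external": False,
    "port": "6080",
    "listen_port": "6080",
    "backend_http_extra": ["timeout tunnel 1h"],
}
# Hypothetical backend list matching the testbed node IPs seen in the log.
backends = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
print(render_listen_stanza("nova_novncproxy", svc, backends))
```

A disabled service (e.g. the `nova-spicehtml5proxy` items with `'enabled': False`, or entries using the string `'no'`) would yield no stanza at all, which is why those items show as `skipping` in the task output.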
2025-11-23 00:50:12.945815 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.945820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-23 00:50:12.945826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-23 00:50:12.945832 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.945837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-23 00:50:12.945843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-23 00:50:12.945848 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.945854 | orchestrator | 2025-11-23 00:50:12.945859 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-23 00:50:12.945865 | orchestrator | Sunday 23 November 2025 00:48:17 +0000 (0:00:01.337) 0:03:49.546 ******* 2025-11-23 00:50:12.945870 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.945876 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.945881 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.945887 | orchestrator | 2025-11-23 00:50:12.945892 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2025-11-23 00:50:12.945897 | orchestrator | Sunday 23 November 2025 00:48:19 +0000 (0:00:02.248) 0:03:51.794 ******* 2025-11-23 00:50:12.945903 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.945908 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.945914 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.945919 | orchestrator | 2025-11-23 00:50:12.945924 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-11-23 00:50:12.945930 | orchestrator | Sunday 23 November 2025 00:48:22 +0000 (0:00:02.638) 0:03:54.433 ******* 2025-11-23 00:50:12.945935 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-11-23 00:50:12.945941 | orchestrator | 2025-11-23 00:50:12.945946 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-11-23 00:50:12.945952 | orchestrator | Sunday 23 November 2025 00:48:23 +0000 (0:00:01.108) 0:03:55.541 ******* 2025-11-23 00:50:12.945957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.945969 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.945978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.945984 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.946003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.946009 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.946055 | orchestrator | 2025-11-23 00:50:12.946062 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-11-23 00:50:12.946067 | orchestrator | Sunday 23 November 2025 00:48:24 +0000 (0:00:01.093) 0:03:56.634 ******* 2025-11-23 00:50:12.946072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.946077 | orchestrator | skipping: [testbed-node-0] 2025-11-23 
00:50:12.946082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.946087 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.946092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-23 00:50:12.946097 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.946102 | orchestrator | 2025-11-23 00:50:12.946107 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-11-23 00:50:12.946112 | orchestrator | Sunday 23 November 2025 00:48:25 +0000 (0:00:01.168) 0:03:57.803 ******* 2025-11-23 00:50:12.946116 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.946121 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.946126 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.946131 | orchestrator | 2025-11-23 00:50:12.946141 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-23 00:50:12.946145 | orchestrator | Sunday 23 November 2025 
00:48:27 +0000 (0:00:01.587) 0:03:59.391 ******* 2025-11-23 00:50:12.946150 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.946155 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.946160 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.946165 | orchestrator | 2025-11-23 00:50:12.946169 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-23 00:50:12.946174 | orchestrator | Sunday 23 November 2025 00:48:29 +0000 (0:00:02.113) 0:04:01.504 ******* 2025-11-23 00:50:12.946179 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.946184 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.946189 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.946193 | orchestrator | 2025-11-23 00:50:12.946198 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-11-23 00:50:12.946203 | orchestrator | Sunday 23 November 2025 00:48:31 +0000 (0:00:02.523) 0:04:04.027 ******* 2025-11-23 00:50:12.946208 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-11-23 00:50:12.946213 | orchestrator | 2025-11-23 00:50:12.946217 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-11-23 00:50:12.946222 | orchestrator | Sunday 23 November 2025 00:48:32 +0000 (0:00:00.740) 0:04:04.768 ******* 2025-11-23 00:50:12.946230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': 
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-23 00:50:12.946235 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.946256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-23 00:50:12.946262 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.946267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-23 00:50:12.946272 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.946277 | orchestrator | 2025-11-23 00:50:12.946282 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-11-23 00:50:12.946287 | orchestrator | Sunday 23 November 2025 00:48:33 +0000 (0:00:01.165) 0:04:05.933 ******* 2025-11-23 00:50:12.946291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-23 00:50:12.946301 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.946306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-23 00:50:12.946311 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.946316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-23 00:50:12.946321 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.946326 | orchestrator | 2025-11-23 00:50:12.946331 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-11-23 00:50:12.946336 | orchestrator | Sunday 23 November 2025 00:48:35 +0000 (0:00:01.123) 0:04:07.057 ******* 2025-11-23 00:50:12.946340 | orchestrator | skipping: [testbed-node-0] 
2025-11-23 00:50:12.946345 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.946350 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.946355 | orchestrator | 2025-11-23 00:50:12.946360 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-23 00:50:12.946364 | orchestrator | Sunday 23 November 2025 00:48:36 +0000 (0:00:01.402) 0:04:08.459 ******* 2025-11-23 00:50:12.946369 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.946374 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.946378 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.946383 | orchestrator | 2025-11-23 00:50:12.946388 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-23 00:50:12.946393 | orchestrator | Sunday 23 November 2025 00:48:38 +0000 (0:00:02.131) 0:04:10.591 ******* 2025-11-23 00:50:12.946397 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.946405 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.946410 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.946415 | orchestrator | 2025-11-23 00:50:12.946420 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-11-23 00:50:12.946474 | orchestrator | Sunday 23 November 2025 00:48:41 +0000 (0:00:02.934) 0:04:13.525 ******* 2025-11-23 00:50:12.946479 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.946484 | orchestrator | 2025-11-23 00:50:12.946488 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-11-23 00:50:12.946493 | orchestrator | Sunday 23 November 2025 00:48:42 +0000 (0:00:01.458) 0:04:14.984 ******* 2025-11-23 00:50:12.946515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.946527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 00:50:12.946532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946537 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.946551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.946570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.946581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 00:50:12.946587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 00:50:12.946592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.946638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.946643 | orchestrator | 2025-11-23 00:50:12.946648 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-11-23 00:50:12.946653 | orchestrator | Sunday 23 November 2025 00:48:45 +0000 (0:00:03.001) 0:04:17.985 ******* 2025-11-23 00:50:12.946658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.946664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 00:50:12.946671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.946706 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.946711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.946716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 00:50:12.946721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.946755 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.946762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.946767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 00:50:12.946772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 00:50:12.946782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 00:50:12.946787 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.946791 | orchestrator | 2025-11-23 00:50:12.946796 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-11-23 00:50:12.946801 | orchestrator | Sunday 23 November 2025 00:48:46 +0000 (0:00:00.646) 0:04:18.631 ******* 2025-11-23 00:50:12.946809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-23 00:50:12.946818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-23 00:50:12.946823 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.946841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2025-11-23 00:50:12.946847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-23 00:50:12.946852 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.946857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-23 00:50:12.946862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-23 00:50:12.946866 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.946871 | orchestrator | 2025-11-23 00:50:12.946876 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-11-23 00:50:12.946881 | orchestrator | Sunday 23 November 2025 00:48:47 +0000 (0:00:01.160) 0:04:19.792 ******* 2025-11-23 00:50:12.946886 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.946890 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.946895 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.946900 | orchestrator | 2025-11-23 00:50:12.946905 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-11-23 00:50:12.946909 | orchestrator | Sunday 23 November 2025 00:48:49 +0000 (0:00:01.370) 0:04:21.162 ******* 2025-11-23 00:50:12.946914 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.946919 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.946924 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.946929 | orchestrator | 
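The task results above follow a consistent pattern: the haproxy-config role iterates over each service's `haproxy` entries and skips those whose `enabled` flag is false (nova-spicehtml5proxy, nova-serialproxy), while rendering config for enabled ones (octavia-api). A minimal Python sketch of that selection logic, assuming simplified service dicts — the actual role is implemented in Ansible/Jinja2, so this is only an illustration of the filter visible in the log, not the role's real code:

```python
# Hypothetical sketch of the skip/render decision seen in the log above.
# kolla-ansible service dicts use both booleans and "yes"/"no" strings
# for enabled flags, so the filter accepts either truthy form.

def enabled_haproxy_entries(services):
    """Yield (service_key, haproxy_name, conf) for entries that would be rendered."""
    for key, svc in services.items():
        for name, conf in svc.get("haproxy", {}).items():
            if conf.get("enabled") in (True, "yes"):
                yield key, name, conf

# Trimmed-down versions of two item dicts from the log: one disabled
# service (all entries skipped) and one enabled service (both rendered).
services = {
    "nova-serialproxy": {
        "group": "nova-serialproxy",
        "enabled": False,
        "haproxy": {
            "nova_serialconsole_proxy": {"enabled": False, "port": "6083"},
            "nova_serialconsole_proxy_external": {"enabled": False, "port": "6083"},
        },
    },
    "octavia-api": {
        "group": "octavia-api",
        "enabled": True,
        "haproxy": {
            "octavia_api": {"enabled": "yes", "port": "9876"},
            "octavia_api_external": {"enabled": "yes", "port": "9876"},
        },
    },
}

rendered = [name for _, name, _ in enabled_haproxy_entries(services)]
print(rendered)  # only the octavia entries survive the filter
```

This mirrors why the log reports `skipping:` per item for the serialproxy entries on every node but `changed:` for the octavia-api haproxy config.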
2025-11-23 00:50:12.946933 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-11-23 00:50:12.946938 | orchestrator | Sunday 23 November 2025 00:48:51 +0000 (0:00:01.969) 0:04:23.132 ******* 2025-11-23 00:50:12.946943 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.946948 | orchestrator | 2025-11-23 00:50:12.946952 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-11-23 00:50:12.946957 | orchestrator | Sunday 23 November 2025 00:48:52 +0000 (0:00:01.290) 0:04:24.422 ******* 2025-11-23 00:50:12.946962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:50:12.946968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:50:12.946993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:50:12.947001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:50:12.947007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:50:12.947013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:50:12.947021 | orchestrator | 2025-11-23 00:50:12.947026 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-11-23 00:50:12.947031 | orchestrator | Sunday 23 November 2025 00:48:57 +0000 (0:00:05.023) 0:04:29.446 ******* 2025-11-23 00:50:12.947052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:50:12.947059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:50:12.947064 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.947069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:50:12.947074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:50:12.947083 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.947104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:50:12.947110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:50:12.947115 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.947120 | orchestrator | 2025-11-23 00:50:12.947125 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-11-23 00:50:12.947130 | orchestrator | Sunday 23 November 2025 00:48:58 +0000 (0:00:00.634) 0:04:30.080 ******* 2025-11-23 00:50:12.947135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-23 00:50:12.947140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-23 00:50:12.947145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-23 00:50:12.947150 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.947160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-23 00:50:12.947165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-23 00:50:12.947170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-23 00:50:12.947175 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.947179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-23 00:50:12.947185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-23 00:50:12.947190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-23 00:50:12.947195 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.947199 | orchestrator | 2025-11-23 00:50:12.947207 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL users config] ********* 2025-11-23 00:50:12.947212 | orchestrator | Sunday 23 November 2025 00:48:58 +0000 (0:00:00.847) 0:04:30.928 ******* 2025-11-23 00:50:12.947217 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.947222 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.947226 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.947231 | orchestrator | 2025-11-23 00:50:12.947236 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-11-23 00:50:12.947241 | orchestrator | Sunday 23 November 2025 00:48:59 +0000 (0:00:00.735) 0:04:31.664 ******* 2025-11-23 00:50:12.947245 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.947250 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.947255 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.947260 | orchestrator | 2025-11-23 00:50:12.947279 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-11-23 00:50:12.947285 | orchestrator | Sunday 23 November 2025 00:49:00 +0000 (0:00:01.197) 0:04:32.861 ******* 2025-11-23 00:50:12.947289 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.947294 | orchestrator | 2025-11-23 00:50:12.947299 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-11-23 00:50:12.947304 | orchestrator | Sunday 23 November 2025 00:49:02 +0000 (0:00:01.434) 0:04:34.295 ******* 2025-11-23 00:50:12.947309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-23 00:50:12.947314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-23 00:50:12.947324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:50:12.947329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:50:12.947334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-23 00:50:12.947391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:50:12.947399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-11-23 00:50:12.947423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-23 00:50:12.947449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-23 00:50:12.947454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-23 00:50:12.947485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-23 00:50:12.947490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-23 00:50:12.947517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-23 00:50:12.947526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947541 | orchestrator | 2025-11-23 00:50:12.947546 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-11-23 00:50:12.947550 | orchestrator | Sunday 23 November 2025 00:49:06 +0000 (0:00:04.031) 0:04:38.327 ******* 2025-11-23 00:50:12.947555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-23 00:50:12.947563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:50:12.947571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-23 00:50:12.947585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:50:12.947595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-23 00:50:12.947619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947624 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-23 00:50:12.947629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-11-23 00:50:12.947639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-23 00:50:12.947645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-23 00:50:12.947669 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.947674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-23 00:50:12.947684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:50:12.947732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947750 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.947755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-23 00:50:12.947771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-23 00:50:12.947779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:50:12.947795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:50:12.947800 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.947805 | orchestrator | 2025-11-23 00:50:12.947810 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-11-23 00:50:12.947815 | orchestrator | Sunday 23 November 2025 00:49:07 +0000 (0:00:00.993) 0:04:39.320 ******* 2025-11-23 00:50:12.947820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-23 00:50:12.947826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-23 00:50:12.947831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-23 00:50:12.947837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-23 00:50:12.947842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-23 00:50:12.947847 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.947852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-23 00:50:12.947857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-23 00:50:12.947862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-23 00:50:12.947870 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.947876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-23 00:50:12.947884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-23 00:50:12.947889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-23 00:50:12.947896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-23 00:50:12.947901 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.947906 | orchestrator | 2025-11-23 00:50:12.947911 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-11-23 00:50:12.947916 | orchestrator | Sunday 23 November 2025 00:49:08 +0000 (0:00:00.890) 0:04:40.211 ******* 2025-11-23 00:50:12.947921 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.947926 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.947931 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.947935 | orchestrator | 2025-11-23 
00:50:12.947940 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-11-23 00:50:12.947945 | orchestrator | Sunday 23 November 2025 00:49:08 +0000 (0:00:00.409) 0:04:40.620 ******* 2025-11-23 00:50:12.947950 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.947955 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.947959 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.947964 | orchestrator | 2025-11-23 00:50:12.947969 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-11-23 00:50:12.947973 | orchestrator | Sunday 23 November 2025 00:49:09 +0000 (0:00:01.235) 0:04:41.855 ******* 2025-11-23 00:50:12.947978 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.947983 | orchestrator | 2025-11-23 00:50:12.947988 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-11-23 00:50:12.947992 | orchestrator | Sunday 23 November 2025 00:49:11 +0000 (0:00:01.668) 0:04:43.524 ******* 2025-11-23 00:50:12.947997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-23 00:50:12.948003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-23 00:50:12.948015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-23 00:50:12.948021 | orchestrator | 2025-11-23 00:50:12.948028 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-11-23 00:50:12.948033 | orchestrator | Sunday 23 November 2025 00:49:13 +0000 (0:00:02.206) 0:04:45.731 ******* 2025-11-23 00:50:12.948038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-11-23 00:50:12.948044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-11-23 00:50:12.948049 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948059 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-11-23 00:50:12.948070 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948075 | orchestrator | 2025-11-23 00:50:12.948079 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-11-23 00:50:12.948084 | orchestrator | Sunday 23 November 2025 00:49:14 +0000 (0:00:00.368) 0:04:46.099 
******* 2025-11-23 00:50:12.948089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-23 00:50:12.948094 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-23 00:50:12.948106 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-23 00:50:12.948116 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948121 | orchestrator | 2025-11-23 00:50:12.948126 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-11-23 00:50:12.948130 | orchestrator | Sunday 23 November 2025 00:49:14 +0000 (0:00:00.792) 0:04:46.892 ******* 2025-11-23 00:50:12.948137 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948142 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948147 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948151 | orchestrator | 2025-11-23 00:50:12.948156 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-11-23 00:50:12.948161 | orchestrator | Sunday 23 November 2025 00:49:15 +0000 (0:00:00.410) 0:04:47.303 ******* 2025-11-23 00:50:12.948166 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948171 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948176 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948180 | orchestrator | 2025-11-23 00:50:12.948185 | orchestrator | TASK [include_role : skyline] ************************************************** 
2025-11-23 00:50:12.948190 | orchestrator | Sunday 23 November 2025 00:49:16 +0000 (0:00:01.180) 0:04:48.483 ******* 2025-11-23 00:50:12.948194 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:50:12.948199 | orchestrator | 2025-11-23 00:50:12.948204 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-11-23 00:50:12.948209 | orchestrator | Sunday 23 November 2025 00:49:18 +0000 (0:00:01.661) 0:04:50.145 ******* 2025-11-23 00:50:12.948214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.948222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.948230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.948238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.948244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.948252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-23 00:50:12.948257 | orchestrator | 2025-11-23 00:50:12.948262 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-11-23 00:50:12.948267 | orchestrator | Sunday 23 November 2025 00:49:23 +0000 (0:00:05.406) 0:04:55.551 ******* 2025-11-23 00:50:12.948272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.948283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.948289 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.948303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.948308 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.948320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-23 00:50:12.948325 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948330 | orchestrator | 2025-11-23 00:50:12.948335 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-11-23 00:50:12.948342 | orchestrator | Sunday 23 November 2025 00:49:24 +0000 (0:00:00.582) 0:04:56.133 ******* 2025-11-23 00:50:12.948347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948371 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948376 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948396 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-23 00:50:12.948420 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948436 | orchestrator | 2025-11-23 00:50:12.948441 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-11-23 00:50:12.948446 | orchestrator | Sunday 23 November 2025 00:49:25 +0000 (0:00:01.449) 0:04:57.583 ******* 2025-11-23 00:50:12.948450 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.948455 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.948460 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.948465 | orchestrator | 2025-11-23 00:50:12.948469 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-11-23 00:50:12.948474 | orchestrator | Sunday 23 November 2025 00:49:26 +0000 (0:00:01.356) 0:04:58.939 ******* 2025-11-23 00:50:12.948479 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.948484 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.948489 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.948493 | orchestrator | 2025-11-23 00:50:12.948498 | orchestrator | TASK [include_role : swift] **************************************************** 2025-11-23 00:50:12.948508 | orchestrator | Sunday 23 November 2025 00:49:28 +0000 (0:00:01.975) 0:05:00.915 ******* 2025-11-23 00:50:12.948513 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948518 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948523 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948531 | orchestrator | 2025-11-23 00:50:12.948536 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-11-23 00:50:12.948541 | orchestrator | Sunday 23 November 2025 00:49:29 +0000 (0:00:00.293) 0:05:01.209 ******* 2025-11-23 00:50:12.948545 | 
orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948550 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948555 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948560 | orchestrator | 2025-11-23 00:50:12.948565 | orchestrator | TASK [include_role : trove] **************************************************** 2025-11-23 00:50:12.948572 | orchestrator | Sunday 23 November 2025 00:49:29 +0000 (0:00:00.279) 0:05:01.488 ******* 2025-11-23 00:50:12.948577 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948581 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948586 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948591 | orchestrator | 2025-11-23 00:50:12.948596 | orchestrator | TASK [include_role : venus] **************************************************** 2025-11-23 00:50:12.948601 | orchestrator | Sunday 23 November 2025 00:49:29 +0000 (0:00:00.528) 0:05:02.016 ******* 2025-11-23 00:50:12.948605 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948610 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948615 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948619 | orchestrator | 2025-11-23 00:50:12.948624 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-11-23 00:50:12.948629 | orchestrator | Sunday 23 November 2025 00:49:30 +0000 (0:00:00.293) 0:05:02.310 ******* 2025-11-23 00:50:12.948634 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948638 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948643 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948648 | orchestrator | 2025-11-23 00:50:12.948653 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-11-23 00:50:12.948657 | orchestrator | Sunday 23 November 2025 00:49:30 +0000 (0:00:00.276) 0:05:02.586 ******* 2025-11-23 00:50:12.948662 | 
orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948667 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948672 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948676 | orchestrator | 2025-11-23 00:50:12.948681 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-11-23 00:50:12.948686 | orchestrator | Sunday 23 November 2025 00:49:31 +0000 (0:00:00.732) 0:05:03.318 ******* 2025-11-23 00:50:12.948691 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.948696 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.948700 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.948705 | orchestrator | 2025-11-23 00:50:12.948710 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-11-23 00:50:12.948715 | orchestrator | Sunday 23 November 2025 00:49:31 +0000 (0:00:00.601) 0:05:03.920 ******* 2025-11-23 00:50:12.948719 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.948724 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.948729 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.948734 | orchestrator | 2025-11-23 00:50:12.948738 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-11-23 00:50:12.948743 | orchestrator | Sunday 23 November 2025 00:49:32 +0000 (0:00:00.315) 0:05:04.235 ******* 2025-11-23 00:50:12.948748 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.948752 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.948757 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.948762 | orchestrator | 2025-11-23 00:50:12.948767 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-11-23 00:50:12.948771 | orchestrator | Sunday 23 November 2025 00:49:32 +0000 (0:00:00.746) 0:05:04.982 ******* 2025-11-23 00:50:12.948776 | orchestrator | ok: [testbed-node-0] 2025-11-23 
00:50:12.948781 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.948786 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.948790 | orchestrator | 2025-11-23 00:50:12.948795 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-11-23 00:50:12.948803 | orchestrator | Sunday 23 November 2025 00:49:33 +0000 (0:00:00.995) 0:05:05.977 ******* 2025-11-23 00:50:12.948808 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.948812 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.948817 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.948822 | orchestrator | 2025-11-23 00:50:12.948827 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-11-23 00:50:12.948831 | orchestrator | Sunday 23 November 2025 00:49:34 +0000 (0:00:00.760) 0:05:06.738 ******* 2025-11-23 00:50:12.948837 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.948841 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.948846 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.948851 | orchestrator | 2025-11-23 00:50:12.948856 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-11-23 00:50:12.948861 | orchestrator | Sunday 23 November 2025 00:49:38 +0000 (0:00:04.246) 0:05:10.985 ******* 2025-11-23 00:50:12.948865 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.948870 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.948875 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.948880 | orchestrator | 2025-11-23 00:50:12.948885 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-11-23 00:50:12.948889 | orchestrator | Sunday 23 November 2025 00:49:42 +0000 (0:00:03.707) 0:05:14.692 ******* 2025-11-23 00:50:12.948894 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.948899 | orchestrator | changed: 
[testbed-node-2] 2025-11-23 00:50:12.948904 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.948909 | orchestrator | 2025-11-23 00:50:12.948914 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-11-23 00:50:12.948918 | orchestrator | Sunday 23 November 2025 00:49:58 +0000 (0:00:15.517) 0:05:30.209 ******* 2025-11-23 00:50:12.948923 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.948928 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.948933 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.948937 | orchestrator | 2025-11-23 00:50:12.948942 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-11-23 00:50:12.948947 | orchestrator | Sunday 23 November 2025 00:49:59 +0000 (0:00:00.941) 0:05:31.151 ******* 2025-11-23 00:50:12.948952 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:50:12.948959 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:50:12.948964 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:50:12.948969 | orchestrator | 2025-11-23 00:50:12.948974 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-11-23 00:50:12.948979 | orchestrator | Sunday 23 November 2025 00:50:03 +0000 (0:00:03.995) 0:05:35.146 ******* 2025-11-23 00:50:12.948983 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.948988 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.948993 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.948998 | orchestrator | 2025-11-23 00:50:12.949002 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-11-23 00:50:12.949007 | orchestrator | Sunday 23 November 2025 00:50:03 +0000 (0:00:00.317) 0:05:35.464 ******* 2025-11-23 00:50:12.949012 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.949019 | orchestrator | skipping: [testbed-node-1] 
2025-11-23 00:50:12.949024 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.949029 | orchestrator | 2025-11-23 00:50:12.949033 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-11-23 00:50:12.949038 | orchestrator | Sunday 23 November 2025 00:50:03 +0000 (0:00:00.310) 0:05:35.774 ******* 2025-11-23 00:50:12.949043 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.949047 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.949052 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.949057 | orchestrator | 2025-11-23 00:50:12.949062 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-11-23 00:50:12.949066 | orchestrator | Sunday 23 November 2025 00:50:04 +0000 (0:00:00.528) 0:05:36.302 ******* 2025-11-23 00:50:12.949075 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.949080 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.949084 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.949089 | orchestrator | 2025-11-23 00:50:12.949094 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-11-23 00:50:12.949098 | orchestrator | Sunday 23 November 2025 00:50:04 +0000 (0:00:00.286) 0:05:36.589 ******* 2025-11-23 00:50:12.949103 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.949108 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:50:12.949113 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.949117 | orchestrator | 2025-11-23 00:50:12.949122 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-11-23 00:50:12.949127 | orchestrator | Sunday 23 November 2025 00:50:04 +0000 (0:00:00.330) 0:05:36.919 ******* 2025-11-23 00:50:12.949131 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:50:12.949136 | orchestrator | skipping: [testbed-node-1] 
2025-11-23 00:50:12.949141 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:50:12.949146 | orchestrator | 2025-11-23 00:50:12.949150 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-11-23 00:50:12.949155 | orchestrator | Sunday 23 November 2025 00:50:05 +0000 (0:00:00.297) 0:05:37.217 ******* 2025-11-23 00:50:12.949160 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.949165 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.949169 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.949174 | orchestrator | 2025-11-23 00:50:12.949179 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-11-23 00:50:12.949183 | orchestrator | Sunday 23 November 2025 00:50:10 +0000 (0:00:04.901) 0:05:42.119 ******* 2025-11-23 00:50:12.949188 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:50:12.949193 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:50:12.949198 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:50:12.949202 | orchestrator | 2025-11-23 00:50:12.949207 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:50:12.949212 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-23 00:50:12.949217 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-23 00:50:12.949222 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-23 00:50:12.949227 | orchestrator | 2025-11-23 00:50:12.949231 | orchestrator | 2025-11-23 00:50:12.949236 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:50:12.949241 | orchestrator | Sunday 23 November 2025 00:50:10 +0000 (0:00:00.727) 0:05:42.846 ******* 2025-11-23 00:50:12.949246 | orchestrator | 
===============================================================================
2025-11-23 00:50:12.949250 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.52s
2025-11-23 00:50:12.949255 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.41s
2025-11-23 00:50:12.949260 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.02s
2025-11-23 00:50:12.949265 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.90s
2025-11-23 00:50:12.949269 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.57s
2025-11-23 00:50:12.949274 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.50s
2025-11-23 00:50:12.949279 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.25s
2025-11-23 00:50:12.949283 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.13s
2025-11-23 00:50:12.949288 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.03s
2025-11-23 00:50:12.949293 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.00s
2025-11-23 00:50:12.949302 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.89s
2025-11-23 00:50:12.949306 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.88s
2025-11-23 00:50:12.949311 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.88s
2025-11-23 00:50:12.949318 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.82s
2025-11-23 00:50:12.949323 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.71s
2025-11-23 00:50:12.949328 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.66s
2025-11-23 00:50:12.949333 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.61s
2025-11-23 00:50:12.949337 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.55s
2025-11-23 00:50:12.949342 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.51s
2025-11-23 00:50:12.949347 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.41s
2025-11-23 00:50:12.949354 | orchestrator | 2025-11-23 00:50:12 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state STARTED
2025-11-23 00:50:12.949359 | orchestrator | 2025-11-23 00:50:12 | INFO  | Task 5cfa77a8-4f7b-47fd-9950-571f9f932204 is in state STARTED
2025-11-23 00:50:12.949364 | orchestrator | 2025-11-23 00:50:12 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:50:15.965007 | orchestrator | 2025-11-23 00:50:15 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:50:15.966512 | orchestrator | 2025-11-23 00:50:15 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state STARTED
2025-11-23 00:50:15.969390 | orchestrator | 2025-11-23 00:50:15 | INFO  | Task 5cfa77a8-4f7b-47fd-9950-571f9f932204 is in state STARTED
2025-11-23 00:50:15.969460 | orchestrator | 2025-11-23 00:50:15 | INFO  | Wait 1 second(s) until the next check
[... the same three status lines and wait message repeat every ~3 seconds from 00:50:19 through 00:51:59, all three tasks remaining in state STARTED ...]
2025-11-23 00:52:02.481771 | orchestrator | 2025-11-23 00:52:02 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state STARTED
2025-11-23 00:52:02.484221 | orchestrator | 2025-11-23 00:52:02 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state STARTED
2025-11-23 00:52:02.485301 | orchestrator | 2025-11-23 00:52:02 | INFO  | Task 5cfa77a8-4f7b-47fd-9950-571f9f932204 is in state STARTED
2025-11-23 00:52:02.485350 | orchestrator | 2025-11-23 00:52:02 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:52:05.540440 | orchestrator |
2025-11-23 00:52:05.540666 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14
2025-11-23 00:52:05.540700 | orchestrator |
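The repeated "is in state STARTED / Wait 1 second(s) until the next check" lines above come from a simple polling loop that watches the deployment tasks until they leave the running state. A minimal sketch of such a wait loop follows; the `get_state` callable and the short task ids are hypothetical stand-ins for illustration, not the actual OSISM task API.

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll task states until none is left running.

    get_state: callable mapping a task id to its state string
    (hypothetical stand-in for whatever task API produced the log above).
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding while looping is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print("INFO  | Wait 1 second(s) until the next check")
            time.sleep(interval)


# Example with a fake state source: every task reports SUCCESS immediately.
wait_for_tasks(lambda tid: "SUCCESS", ["f060dc03", "72c82a38"], interval=0)
```

In the job above the loop ran for roughly two minutes before the next play started, which matches the ~3-second cadence of the status lines.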
2025-11-23 00:52:05.540711 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-11-23 00:52:05.540723 | orchestrator |
2025-11-23 00:52:05.540734 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-11-23 00:52:05.540745 | orchestrator | Sunday 23 November 2025 00:41:46 +0000 (0:00:00.632) 0:00:00.632 *******
2025-11-23 00:52:05.540757 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:52:05.540768 | orchestrator |
2025-11-23 00:52:05.540806 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-11-23 00:52:05.540819 | orchestrator | Sunday 23 November 2025 00:41:47 +0000 (0:00:01.017) 0:00:01.649 *******
2025-11-23 00:52:05.540829 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.540865 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.540912 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.540982 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.540996 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.541008 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.541020 | orchestrator |
2025-11-23 00:52:05.541032 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-11-23 00:52:05.541044 | orchestrator | Sunday 23 November 2025 00:41:49 +0000 (0:00:01.528) 0:00:03.177 *******
2025-11-23 00:52:05.541056 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.541068 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.541080 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.541093 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.541105 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.541117 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.541130 | orchestrator |
2025-11-23 00:52:05.541143 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-11-23 00:52:05.541154 | orchestrator | Sunday 23 November 2025 00:41:50 +0000 (0:00:00.930) 0:00:04.107 *******
2025-11-23 00:52:05.541164 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.541175 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.541212 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.541223 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.541234 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.541244 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.541255 | orchestrator |
2025-11-23 00:52:05.541265 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-11-23 00:52:05.541276 | orchestrator | Sunday 23 November 2025 00:41:51 +0000 (0:00:00.837) 0:00:04.944 *******
2025-11-23 00:52:05.541287 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.541328 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.541339 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.541370 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.541490 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.541504 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.541514 | orchestrator |
2025-11-23 00:52:05.541525 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-11-23 00:52:05.541536 | orchestrator | Sunday 23 November 2025 00:41:51 +0000 (0:00:00.749) 0:00:05.693 *******
2025-11-23 00:52:05.541577 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.541588 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.541598 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.541609 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.541619 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.541630 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.541640 | orchestrator |
2025-11-23 00:52:05.541708 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-11-23 00:52:05.541719 | orchestrator | Sunday 23 November 2025 00:41:52 +0000 (0:00:00.591) 0:00:06.285 *******
2025-11-23 00:52:05.541730 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.541740 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.541750 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.541761 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.541771 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.541781 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.541881 | orchestrator |
2025-11-23 00:52:05.541893 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-11-23 00:52:05.541904 | orchestrator | Sunday 23 November 2025 00:41:53 +0000 (0:00:00.865) 0:00:07.150 *******
2025-11-23 00:52:05.541914 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.541926 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.541936 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.541947 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.541957 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.541969 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.541979 | orchestrator |
2025-11-23 00:52:05.541990 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-11-23 00:52:05.542109 | orchestrator | Sunday 23 November 2025 00:41:54 +0000 (0:00:00.844) 0:00:07.995 *******
2025-11-23 00:52:05.542126 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.542137 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.542147 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.542158 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.542169 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.542179 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.542189 | orchestrator |
2025-11-23 00:52:05.542200 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-11-23 00:52:05.542211 | orchestrator | Sunday 23 November 2025 00:41:55 +0000 (0:00:01.041) 0:00:09.037 *******
2025-11-23 00:52:05.542222 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-23 00:52:05.542233 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-23 00:52:05.542243 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-23 00:52:05.542254 | orchestrator |
2025-11-23 00:52:05.542265 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-11-23 00:52:05.542367 | orchestrator | Sunday 23 November 2025 00:41:55 +0000 (0:00:00.771) 0:00:09.808 *******
2025-11-23 00:52:05.542445 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.542458 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.542469 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.542501 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.542512 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.542523 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.542533 | orchestrator |
2025-11-23 00:52:05.542544 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-11-23 00:52:05.542555 | orchestrator | Sunday 23 November 2025 00:41:57 +0000 (0:00:01.303) 0:00:11.112 *******
2025-11-23 00:52:05.542565 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-23 00:52:05.542576 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-23 00:52:05.542586 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-23 00:52:05.542597 | orchestrator |
2025-11-23 00:52:05.542607 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-11-23 00:52:05.542797 | orchestrator | Sunday 23 November 2025 00:42:00 +0000 (0:00:02.788) 0:00:13.900 *******
2025-11-23 00:52:05.542813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-23 00:52:05.542824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-23 00:52:05.542835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-23 00:52:05.542846 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.542891 | orchestrator |
2025-11-23 00:52:05.542904 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-11-23 00:52:05.542915 | orchestrator | Sunday 23 November 2025 00:42:00 +0000 (0:00:00.591) 0:00:14.492 *******
2025-11-23 00:52:05.542928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.542954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.542966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.542977 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.542988 | orchestrator |
2025-11-23 00:52:05.542999 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-11-23 00:52:05.543067 | orchestrator | Sunday 23 November 2025 00:42:01 +0000 (0:00:00.701) 0:00:15.194 *******
2025-11-23 00:52:05.543087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.543101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.543121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.543143 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.543155 | orchestrator |
2025-11-23 00:52:05.543166 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-11-23 00:52:05.543177 | orchestrator | Sunday 23 November 2025 00:42:01 +0000 (0:00:00.442) 0:00:15.636 *******
2025-11-23 00:52:05.543228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-23 00:41:57.905203', 'end': '2025-11-23 00:41:58.197309', 'delta': '0:00:00.292106', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.543244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-23 00:41:58.749880', 'end': '2025-11-23 00:41:59.034136', 'delta': '0:00:00.284256', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.543256 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-23 00:41:59.522869', 'end': '2025-11-23 00:41:59.812704', 'delta': '0:00:00.289835', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.543267 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.543279 | orchestrator |
2025-11-23 00:52:05.543289 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-11-23 00:52:05.543343 | orchestrator | Sunday 23 November 2025 00:42:01 +0000 (0:00:00.211) 0:00:15.848 *******
2025-11-23 00:52:05.543354 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.543681 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.543694 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.543704 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.543713 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.543722 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.543732 | orchestrator |
2025-11-23 00:52:05.543742 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-11-23 00:52:05.543751 | orchestrator | Sunday 23 November 2025 00:42:04 +0000 (0:00:02.433) 0:00:18.281 *******
2025-11-23 00:52:05.543761 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-23 00:52:05.543770 | orchestrator |
2025-11-23 00:52:05.543780 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-11-23 00:52:05.543798 | orchestrator | Sunday 23 November 2025 00:42:05 +0000 (0:00:00.843) 0:00:19.125 *******
2025-11-23 00:52:05.543807 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.543817 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.543826 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.543836 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.543846 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.543855 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.543865 | orchestrator |
2025-11-23 00:52:05.543874 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-11-23 00:52:05.543884 | orchestrator | Sunday 23 November 2025 00:42:07 +0000 (0:00:01.804) 0:00:20.930 *******
2025-11-23 00:52:05.543894 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.543903 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.543915 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.543932 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.544007 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.544018 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.544028 | orchestrator |
2025-11-23 00:52:05.544038 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-11-23 00:52:05.544047 | orchestrator | Sunday 23 November 2025 00:42:09 +0000 (0:00:02.242) 0:00:23.172 *******
2025-11-23 00:52:05.544064 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.544074 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.544083 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.544092 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.544101 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.544111 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.544151 | orchestrator |
2025-11-23 00:52:05.544163 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-11-23 00:52:05.544173 | orchestrator | Sunday 23 November 2025 00:42:10 +0000 (0:00:01.070) 0:00:24.243 *******
2025-11-23 00:52:05.544182 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.544192 | orchestrator |
2025-11-23 00:52:05.544201 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-11-23 00:52:05.544210 | orchestrator | Sunday 23 November 2025 00:42:10 +0000 (0:00:00.197) 0:00:24.441 *******
2025-11-23 00:52:05.544220 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.544229 | orchestrator |
2025-11-23 00:52:05.544239 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-11-23 00:52:05.544248 | orchestrator | Sunday 23 November 2025 00:42:10 +0000 (0:00:00.220) 0:00:24.661 *******
2025-11-23 00:52:05.544257 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.544266 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.544276 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.544295 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.544306 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.544315 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.544328 | orchestrator |
2025-11-23 00:52:05.544342 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-11-23 00:52:05.544352 | orchestrator | Sunday 23 November 2025 00:42:11 +0000 (0:00:00.867) 0:00:25.529 *******
2025-11-23 00:52:05.544362 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.544372 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.544438 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.544450 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.544459 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.544468 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.544478 | orchestrator |
2025-11-23 00:52:05.544487 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-11-23 00:52:05.544496 | orchestrator | Sunday 23 November 2025 00:42:12 +0000 (0:00:00.949) 0:00:26.479 *******
2025-11-23 00:52:05.544506 | orchestrator | skipping:
[testbed-node-3] 2025-11-23 00:52:05.544525 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.544535 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.544544 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.544553 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.544562 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.544572 | orchestrator | 2025-11-23 00:52:05.544581 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-23 00:52:05.544596 | orchestrator | Sunday 23 November 2025 00:42:13 +0000 (0:00:00.508) 0:00:26.987 ******* 2025-11-23 00:52:05.544609 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.544619 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.544629 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.544639 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.544649 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.544658 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.544667 | orchestrator | 2025-11-23 00:52:05.544677 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-23 00:52:05.544686 | orchestrator | Sunday 23 November 2025 00:42:13 +0000 (0:00:00.633) 0:00:27.621 ******* 2025-11-23 00:52:05.544696 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.544705 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.544714 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.544724 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.544733 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.544742 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.544752 | orchestrator | 2025-11-23 00:52:05.544761 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-23 00:52:05.544771 | 
orchestrator | Sunday 23 November 2025 00:42:14 +0000 (0:00:00.649) 0:00:28.270 ******* 2025-11-23 00:52:05.544780 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.544789 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.544799 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.544808 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.544818 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.544834 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.544850 | orchestrator | 2025-11-23 00:52:05.544866 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-11-23 00:52:05.544882 | orchestrator | Sunday 23 November 2025 00:42:15 +0000 (0:00:00.870) 0:00:29.140 ******* 2025-11-23 00:52:05.544896 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.544905 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.544915 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.544924 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.544934 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.544943 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.544953 | orchestrator | 2025-11-23 00:52:05.544961 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-23 00:52:05.544969 | orchestrator | Sunday 23 November 2025 00:42:15 +0000 (0:00:00.684) 0:00:29.825 ******* 2025-11-23 00:52:05.544985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b63f9958--8ac2--53b3--b8b4--a449f25b1af6-osd--block--b63f9958--8ac2--53b3--b8b4--a449f25b1af6', 'dm-uuid-LVM-ZhGuNkuCYqZ22eeL4QIwElfPuYQmz8FFk4w4fzk1FIBSmAJBMs9l4Qsgvq1IIDXi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.544995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--939e3465--cd43--5a63--a3e3--1280596736df-osd--block--939e3465--cd43--5a63--a3e3--1280596736df', 'dm-uuid-LVM-SWp7HzaJchIhI58WXkMnP8eIugd2c5So0aicnxK8wFcHH4NW03reknDLUhYbEvs4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c884493c--7b6c--5149--8c24--d999b26a8d07-osd--block--c884493c--7b6c--5149--8c24--d999b26a8d07', 'dm-uuid-LVM-lUT9gI4lTJmblmstgY3lht2ya3ox2wczhMCrF6ZBLgU835h33UNldGtJ6SvNUZTd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545042 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1076031f--9245--50d5--902f--2c37ef490a74-osd--block--1076031f--9245--50d5--902f--2c37ef490a74', 'dm-uuid-LVM-z2ZcrJagA2yYRVfFvkDYSOppstHO3tUqxpuYyNzjcKzfq5DuY7sDUqsVJCykIotj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545076 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545185 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c884493c--7b6c--5149--8c24--d999b26a8d07-osd--block--c884493c--7b6c--5149--8c24--d999b26a8d07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BRexkV-tahH-yA2u-ydrq-8lFY-h4Zu-7IwRw9', 'scsi-0QEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1', 'scsi-SQEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1076031f--9245--50d5--902f--2c37ef490a74-osd--block--1076031f--9245--50d5--902f--2c37ef490a74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LKWh2Q-vidt-Iviq-pICe-u2at-FlnH-kwWZt0', 'scsi-0QEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4', 'scsi-SQEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f', 'scsi-SQEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e77b7216--a915--581b--8f3c--a7fc3e50862f-osd--block--e77b7216--a915--581b--8f3c--a7fc3e50862f', 'dm-uuid-LVM-mT6XHzw82IsYAS3eWV9p9TcL5Wbh6CszDyTXscQW5taWaSdCTO19EFRaWHgLZy7A'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545269 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.545277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--889c1fef--e00e--5a44--b704--8d22cfa7cd7a-osd--block--889c1fef--e00e--5a44--b704--8d22cfa7cd7a', 'dm-uuid-LVM-92FCts3GZ5oL8rtXoAyX1IOghxPDxEUkw2J2BM2aYzwtJmNsmvzyRmRQMfeR1BQg'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) [2025-11-23 00:52:05 | INFO  | Task f060dc03-4303-475f-b0c4-a891fe8f9aba is in state SUCCESS 2025-11-23 00:52:05.545359 | orchestrator | 2025-11-23 00:52:05.545368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545417 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b63f9958--8ac2--53b3--b8b4--a449f25b1af6-osd--block--b63f9958--8ac2--53b3--b8b4--a449f25b1af6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GBn3eA-uLy5-A6Ym-2hMg-2a6o-thuD-CoyvUV', 'scsi-0QEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0', 'scsi-SQEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part1', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part14', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part15', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part16', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545473 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--939e3465--cd43--5a63--a3e3--1280596736df-osd--block--939e3465--cd43--5a63--a3e3--1280596736df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WX2Qma-XamP-cx7n-eYdI-GpT2-3dkl-f9Ja5e', 'scsi-0QEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356', 'scsi-SQEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545501 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.545515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f', 'scsi-SQEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-11-23 00:52:05.545548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545623 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.545631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-11-23 00:52:05.545699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545747 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545806 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545823 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.545834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:52:05.545842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-11-23 00:52:05.545858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part1', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part14', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part15', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part16', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e77b7216--a915--581b--8f3c--a7fc3e50862f-osd--block--e77b7216--a915--581b--8f3c--a7fc3e50862f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qcAps8-tIDI-tYH2-CroA-Vvkw-JSTi-Q4ra27', 'scsi-0QEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa', 'scsi-SQEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545882 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.545893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--889c1fef--e00e--5a44--b704--8d22cfa7cd7a-osd--block--889c1fef--e00e--5a44--b704--8d22cfa7cd7a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-96Gig0-9IFG-aPZ1-2t0N-1h63-VfI2-acyoik', 'scsi-0QEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b', 'scsi-SQEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f', 'scsi-SQEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:52:05.545925 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.545933 | orchestrator | 2025-11-23 00:52:05.545941 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-11-23 00:52:05.545949 | orchestrator | Sunday 23 November 2025 00:42:17 +0000 (0:00:01.539) 0:00:31.364 ******* 2025-11-23 00:52:05.545962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b63f9958--8ac2--53b3--b8b4--a449f25b1af6-osd--block--b63f9958--8ac2--53b3--b8b4--a449f25b1af6', 'dm-uuid-LVM-ZhGuNkuCYqZ22eeL4QIwElfPuYQmz8FFk4w4fzk1FIBSmAJBMs9l4Qsgvq1IIDXi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.545985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e77b7216--a915--581b--8f3c--a7fc3e50862f-osd--block--e77b7216--a915--581b--8f3c--a7fc3e50862f', 'dm-uuid-LVM-mT6XHzw82IsYAS3eWV9p9TcL5Wbh6CszDyTXscQW5taWaSdCTO19EFRaWHgLZy7A'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.545999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--889c1fef--e00e--5a44--b704--8d22cfa7cd7a-osd--block--889c1fef--e00e--5a44--b704--8d22cfa7cd7a', 'dm-uuid-LVM-92FCts3GZ5oL8rtXoAyX1IOghxPDxEUkw2J2BM2aYzwtJmNsmvzyRmRQMfeR1BQg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--939e3465--cd43--5a63--a3e3--1280596736df-osd--block--939e3465--cd43--5a63--a3e3--1280596736df', 'dm-uuid-LVM-SWp7HzaJchIhI58WXkMnP8eIugd2c5So0aicnxK8wFcHH4NW03reknDLUhYbEvs4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546059 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546068 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546090 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546098 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546701 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546724 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546751 
| orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546759 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546767 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c884493c--7b6c--5149--8c24--d999b26a8d07-osd--block--c884493c--7b6c--5149--8c24--d999b26a8d07', 'dm-uuid-LVM-lUT9gI4lTJmblmstgY3lht2ya3ox2wczhMCrF6ZBLgU835h33UNldGtJ6SvNUZTd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546962 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1076031f--9245--50d5--902f--2c37ef490a74-osd--block--1076031f--9245--50d5--902f--2c37ef490a74', 'dm-uuid-LVM-z2ZcrJagA2yYRVfFvkDYSOppstHO3tUqxpuYyNzjcKzfq5DuY7sDUqsVJCykIotj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.546991 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547091 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547100 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547108 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547116 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547129 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547215 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part1', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part14', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': 
[], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part15', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part16', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547244 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part1', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part14', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part15', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part16', 'scsi-SQEMU_QEMU_HARDDISK_0839421d-4e00-46fc-9b28-0fb70e6d13db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-23 00:52:05.547318 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547338 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e77b7216--a915--581b--8f3c--a7fc3e50862f-osd--block--e77b7216--a915--581b--8f3c--a7fc3e50862f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qcAps8-tIDI-tYH2-CroA-Vvkw-JSTi-Q4ra27', 'scsi-0QEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa', 'scsi-SQEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547347 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--889c1fef--e00e--5a44--b704--8d22cfa7cd7a-osd--block--889c1fef--e00e--5a44--b704--8d22cfa7cd7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-96Gig0-9IFG-aPZ1-2t0N-1h63-VfI2-acyoik', 'scsi-0QEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b', 'scsi-SQEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547360 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': 
['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547454 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547538 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f', 'scsi-SQEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547576 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547589 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547598 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547687 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547695 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547702 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547709 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:52:05.547765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-23 00:52:05.547782 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547789 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b63f9958--8ac2--53b3--b8b4--a449f25b1af6-osd--block--b63f9958--8ac2--53b3--b8b4--a449f25b1af6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GBn3eA-uLy5-A6Ym-2hMg-2a6o-thuD-CoyvUV', 'scsi-0QEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0', 'scsi-SQEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547872 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547879 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c884493c--7b6c--5149--8c24--d999b26a8d07-osd--block--c884493c--7b6c--5149--8c24--d999b26a8d07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BRexkV-tahH-yA2u-ydrq-8lFY-h4Zu-7IwRw9', 'scsi-0QEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1', 'scsi-SQEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547890 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--939e3465--cd43--5a63--a3e3--1280596736df-osd--block--939e3465--cd43--5a63--a3e3--1280596736df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WX2Qma-XamP-cx7n-eYdI-GpT2-3dkl-f9Ja5e', 'scsi-0QEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356', 'scsi-SQEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547968 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.547976 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.547983 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.547990 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1076031f--9245--50d5--902f--2c37ef490a74-osd--block--1076031f--9245--50d5--902f--2c37ef490a74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LKWh2Q-vidt-Iviq-pICe-u2at-FlnH-kwWZt0', 'scsi-0QEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4', 'scsi-SQEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548045 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_552cdbe4-d2a6-4e41-9a4e-2added6a6c3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548063 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f', 'scsi-SQEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548070 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f', 'scsi-SQEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548087 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548099 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.548159 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548169 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.548176 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548183 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.548190 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548197 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548204 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548215 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548228 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548276 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548286 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548309 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a44215a-1b34-44d7-81a4-9c2ea4da2999-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548323 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:52:05.548330 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.548337 | orchestrator |
2025-11-23 00:52:05.548408 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-11-23 00:52:05.548424 | orchestrator | Sunday 23 November 2025 00:42:18 +0000 (0:00:01.081) 0:00:32.446 *******
2025-11-23 00:52:05.548436 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.548448 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.548459 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.548469 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.548475 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.548482 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.548489 | orchestrator |
2025-11-23 00:52:05.548495 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-11-23 00:52:05.548502 | orchestrator | Sunday 23 November 2025 00:42:19 +0000 (0:00:00.938) 0:00:33.384 *******
2025-11-23 00:52:05.548509 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.548516 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.548522 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.548529 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.548535 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.548542 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.548548 | orchestrator |
2025-11-23 00:52:05.548555 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-11-23 00:52:05.548562 | orchestrator | Sunday 23 November 2025 00:42:20 +0000 (0:00:00.743) 0:00:34.128 *******
2025-11-23 00:52:05.548568 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.548586 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.548593 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.548600 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.548606 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.548613 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.548619 | orchestrator |
2025-11-23 00:52:05.548626 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-11-23 00:52:05.548633 | orchestrator | Sunday 23 November 2025 00:42:21 +0000 (0:00:01.137) 0:00:35.265 *******
2025-11-23 00:52:05.548639 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.548646 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.548652 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.548659 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.548665 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.548672 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.548678 | orchestrator |
2025-11-23 00:52:05.548685 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-11-23 00:52:05.548691 | orchestrator | Sunday 23 November 2025 00:42:21 +0000 (0:00:00.569) 0:00:35.835 *******
2025-11-23 00:52:05.548706 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.548712 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.548719 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.548725 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.548732 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.548738 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.548745 | orchestrator |
2025-11-23 00:52:05.548751 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-11-23 00:52:05.548758 | orchestrator | Sunday 23 November 2025 00:42:22 +0000 (0:00:00.798) 0:00:36.633 *******
2025-11-23 00:52:05.548765 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.548771 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.548778 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.548784 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.548791 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.548797 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.548803 | orchestrator |
2025-11-23 00:52:05.548810 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-11-23 00:52:05.548817 | orchestrator | Sunday 23 November 2025 00:42:23 +0000 (0:00:01.049) 0:00:37.683 *******
2025-11-23 00:52:05.548824 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-11-23 00:52:05.548831 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-11-23 00:52:05.548837 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-11-23 00:52:05.548844 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-11-23 00:52:05.548850 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-11-23 00:52:05.548857 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-11-23 00:52:05.548863 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-11-23 00:52:05.548870 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-11-23 00:52:05.548876 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-11-23 00:52:05.548883 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-11-23 00:52:05.548889 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-11-23 00:52:05.548900 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-11-23 00:52:05.548906 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-11-23 00:52:05.548913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-11-23 00:52:05.548919 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-11-23 00:52:05.548926 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-11-23 00:52:05.548932 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-11-23 00:52:05.548939 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-11-23 00:52:05.548945 | orchestrator |
2025-11-23 00:52:05.548952 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-11-23 00:52:05.548958 | orchestrator | Sunday 23 November 2025 00:42:27 +0000 (0:00:03.295) 0:00:40.978 *******
2025-11-23 00:52:05.548965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-23 00:52:05.548972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-23 00:52:05.548978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-23 00:52:05.548985 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.548991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-11-23 00:52:05.548998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-11-23 00:52:05.549004 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-11-23 00:52:05.549011 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.549018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-11-23 00:52:05.549050 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-11-23 00:52:05.549059 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-11-23 00:52:05.549074 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.549082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-11-23 00:52:05.549090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-11-23 00:52:05.549098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-11-23 00:52:05.549105 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.549113 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-11-23 00:52:05.549121 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-11-23 00:52:05.549129 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-11-23 00:52:05.549136 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.549144 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-11-23 00:52:05.549151 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-11-23 00:52:05.549159 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-11-23 00:52:05.549166 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.549174 | orchestrator |
2025-11-23 00:52:05.549182 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-11-23 00:52:05.549190 | orchestrator | Sunday 23 November 2025 00:42:27 +0000 (0:00:00.651) 0:00:41.630 *******
2025-11-23 00:52:05.549197 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.549205 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.549213 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.549221 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.549229 | orchestrator |
2025-11-23 00:52:05.549237 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-11-23 00:52:05.549245 | orchestrator | Sunday 23 November 2025 00:42:29 +0000 (0:00:01.268) 0:00:42.898 *******
2025-11-23 00:52:05.549253 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.549261 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.549268 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.549276 | orchestrator |
2025-11-23 00:52:05.549284 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-11-23 00:52:05.549292 | orchestrator | Sunday 23 November 2025 00:42:29 +0000 (0:00:00.492) 0:00:43.391 *******
2025-11-23 00:52:05.549299 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.549307 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.549315 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.549322 | orchestrator |
2025-11-23 00:52:05.549330 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-11-23 00:52:05.549338 | orchestrator | Sunday 23 November 2025 00:42:29 +0000 (0:00:00.360) 0:00:43.752 *******
2025-11-23 00:52:05.549345 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.549353 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.549360 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.549368 | orchestrator |
2025-11-23 00:52:05.549376 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-11-23 00:52:05.549406 | orchestrator | Sunday 23 November 2025 00:42:30 +0000 (0:00:00.786) 0:00:44.539 *******
2025-11-23 00:52:05.549413 | orchestrator |
ok: [testbed-node-3] 2025-11-23 00:52:05.549420 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.549427 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.549434 | orchestrator | 2025-11-23 00:52:05.549440 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-23 00:52:05.549447 | orchestrator | Sunday 23 November 2025 00:42:31 +0000 (0:00:00.881) 0:00:45.420 ******* 2025-11-23 00:52:05.549454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-23 00:52:05.549461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-23 00:52:05.549467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-23 00:52:05.549482 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.549489 | orchestrator | 2025-11-23 00:52:05.549495 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-23 00:52:05.549502 | orchestrator | Sunday 23 November 2025 00:42:32 +0000 (0:00:00.676) 0:00:46.097 ******* 2025-11-23 00:52:05.549509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-23 00:52:05.549519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-23 00:52:05.549526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-23 00:52:05.549533 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.549539 | orchestrator | 2025-11-23 00:52:05.549546 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-23 00:52:05.549552 | orchestrator | Sunday 23 November 2025 00:42:33 +0000 (0:00:00.955) 0:00:47.052 ******* 2025-11-23 00:52:05.549559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-23 00:52:05.549566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-23 00:52:05.549573 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-11-23 00:52:05.549579 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.549586 | orchestrator | 2025-11-23 00:52:05.549592 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-23 00:52:05.549599 | orchestrator | Sunday 23 November 2025 00:42:33 +0000 (0:00:00.560) 0:00:47.612 ******* 2025-11-23 00:52:05.549606 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.549613 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.549619 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.549626 | orchestrator | 2025-11-23 00:52:05.549632 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-23 00:52:05.549639 | orchestrator | Sunday 23 November 2025 00:42:34 +0000 (0:00:00.306) 0:00:47.919 ******* 2025-11-23 00:52:05.549646 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-23 00:52:05.549653 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-23 00:52:05.549680 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-23 00:52:05.549688 | orchestrator | 2025-11-23 00:52:05.549694 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-11-23 00:52:05.549701 | orchestrator | Sunday 23 November 2025 00:42:35 +0000 (0:00:01.179) 0:00:49.099 ******* 2025-11-23 00:52:05.549708 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-23 00:52:05.549715 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-23 00:52:05.549721 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-23 00:52:05.549728 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-23 00:52:05.549735 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-23 00:52:05.549741 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-23 00:52:05.549748 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-23 00:52:05.549754 | orchestrator | 2025-11-23 00:52:05.549761 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-11-23 00:52:05.549767 | orchestrator | Sunday 23 November 2025 00:42:35 +0000 (0:00:00.781) 0:00:49.881 ******* 2025-11-23 00:52:05.549774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-23 00:52:05.549780 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-23 00:52:05.549787 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-23 00:52:05.549793 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-23 00:52:05.549800 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-23 00:52:05.549806 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-23 00:52:05.549818 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-23 00:52:05.549824 | orchestrator | 2025-11-23 00:52:05.549831 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-23 00:52:05.549837 | orchestrator | Sunday 23 November 2025 00:42:37 +0000 (0:00:01.975) 0:00:51.856 ******* 2025-11-23 00:52:05.549844 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.549852 | orchestrator | 2025-11-23 00:52:05.549859 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-11-23 00:52:05.549865 | orchestrator | Sunday 23 November 2025 00:42:39 +0000 (0:00:01.237) 0:00:53.094 ******* 2025-11-23 00:52:05.549872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.549879 | orchestrator | 2025-11-23 00:52:05.549885 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-23 00:52:05.549892 | orchestrator | Sunday 23 November 2025 00:42:40 +0000 (0:00:01.141) 0:00:54.235 ******* 2025-11-23 00:52:05.549898 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.549905 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.549911 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.549918 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.549925 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.549931 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.549938 | orchestrator | 2025-11-23 00:52:05.549944 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-23 00:52:05.549951 | orchestrator | Sunday 23 November 2025 00:42:41 +0000 (0:00:01.063) 0:00:55.299 ******* 2025-11-23 00:52:05.549957 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.549964 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.549970 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.549977 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.549983 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.549990 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.549996 | orchestrator | 2025-11-23 00:52:05.550006 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-23 00:52:05.550048 | orchestrator | Sunday 23 November 2025 00:42:42 +0000 
(0:00:01.246) 0:00:56.546 ******* 2025-11-23 00:52:05.550057 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550064 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.550071 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.550077 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550084 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.550091 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550097 | orchestrator | 2025-11-23 00:52:05.550104 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-23 00:52:05.550111 | orchestrator | Sunday 23 November 2025 00:42:43 +0000 (0:00:01.082) 0:00:57.629 ******* 2025-11-23 00:52:05.550117 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550124 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550130 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.550137 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550143 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.550150 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.550156 | orchestrator | 2025-11-23 00:52:05.550163 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-23 00:52:05.550170 | orchestrator | Sunday 23 November 2025 00:42:44 +0000 (0:00:00.764) 0:00:58.394 ******* 2025-11-23 00:52:05.550176 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.550183 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.550189 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.550201 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.550207 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.550235 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.550243 | orchestrator | 2025-11-23 00:52:05.550250 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2025-11-23 00:52:05.550256 | orchestrator | Sunday 23 November 2025 00:42:45 +0000 (0:00:01.333) 0:00:59.727 ******* 2025-11-23 00:52:05.550263 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.550270 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.550276 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.550283 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550289 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550296 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550303 | orchestrator | 2025-11-23 00:52:05.550309 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-23 00:52:05.550316 | orchestrator | Sunday 23 November 2025 00:42:46 +0000 (0:00:00.530) 0:01:00.257 ******* 2025-11-23 00:52:05.550322 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.550329 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.550336 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.550342 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550349 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550355 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550362 | orchestrator | 2025-11-23 00:52:05.550368 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-23 00:52:05.550375 | orchestrator | Sunday 23 November 2025 00:42:47 +0000 (0:00:00.682) 0:01:00.940 ******* 2025-11-23 00:52:05.550425 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.550432 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.550439 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.550446 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.550452 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.550459 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.550465 | orchestrator | 2025-11-23 
00:52:05.550472 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-23 00:52:05.550478 | orchestrator | Sunday 23 November 2025 00:42:48 +0000 (0:00:01.130) 0:01:02.070 ******* 2025-11-23 00:52:05.550485 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.550491 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.550498 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.550504 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.550511 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.550517 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.550524 | orchestrator | 2025-11-23 00:52:05.550530 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-23 00:52:05.550537 | orchestrator | Sunday 23 November 2025 00:42:49 +0000 (0:00:01.683) 0:01:03.754 ******* 2025-11-23 00:52:05.550544 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.550550 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.550557 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.550563 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550570 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550576 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550583 | orchestrator | 2025-11-23 00:52:05.550590 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-23 00:52:05.550597 | orchestrator | Sunday 23 November 2025 00:42:50 +0000 (0:00:01.031) 0:01:04.786 ******* 2025-11-23 00:52:05.550604 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.550610 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.550617 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.550623 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.550630 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.550636 | 
orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.550643 | orchestrator | 2025-11-23 00:52:05.550649 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-23 00:52:05.550662 | orchestrator | Sunday 23 November 2025 00:42:52 +0000 (0:00:01.798) 0:01:06.584 ******* 2025-11-23 00:52:05.550669 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.550675 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.550682 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.550688 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550695 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550701 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550708 | orchestrator | 2025-11-23 00:52:05.550715 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-23 00:52:05.550721 | orchestrator | Sunday 23 November 2025 00:42:53 +0000 (0:00:00.721) 0:01:07.305 ******* 2025-11-23 00:52:05.550728 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.550734 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.550741 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.550747 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550754 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550760 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550767 | orchestrator | 2025-11-23 00:52:05.550777 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-23 00:52:05.550784 | orchestrator | Sunday 23 November 2025 00:42:54 +0000 (0:00:00.919) 0:01:08.225 ******* 2025-11-23 00:52:05.550791 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.550797 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.550804 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.550810 | orchestrator | skipping: [testbed-node-0] 2025-11-23 
00:52:05.550817 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550823 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550830 | orchestrator | 2025-11-23 00:52:05.550836 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-23 00:52:05.550843 | orchestrator | Sunday 23 November 2025 00:42:55 +0000 (0:00:00.875) 0:01:09.101 ******* 2025-11-23 00:52:05.550849 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.550856 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.550862 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.550869 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550875 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550882 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550888 | orchestrator | 2025-11-23 00:52:05.550895 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-23 00:52:05.550902 | orchestrator | Sunday 23 November 2025 00:42:56 +0000 (0:00:00.908) 0:01:10.010 ******* 2025-11-23 00:52:05.550908 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.550915 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.550921 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.550928 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.550956 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.550963 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.550970 | orchestrator | 2025-11-23 00:52:05.550976 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-23 00:52:05.550982 | orchestrator | Sunday 23 November 2025 00:42:56 +0000 (0:00:00.805) 0:01:10.815 ******* 2025-11-23 00:52:05.550988 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.550994 | orchestrator | skipping: [testbed-node-4] 2025-11-23 
00:52:05.551000 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.551006 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.551013 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.551019 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.551025 | orchestrator | 2025-11-23 00:52:05.551031 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-23 00:52:05.551037 | orchestrator | Sunday 23 November 2025 00:42:57 +0000 (0:00:00.811) 0:01:11.627 ******* 2025-11-23 00:52:05.551043 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.551049 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.551060 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.551066 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.551072 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.551078 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.551084 | orchestrator | 2025-11-23 00:52:05.551090 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-23 00:52:05.551096 | orchestrator | Sunday 23 November 2025 00:42:58 +0000 (0:00:00.677) 0:01:12.305 ******* 2025-11-23 00:52:05.551102 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.551108 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.551114 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.551120 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.551126 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.551132 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.551138 | orchestrator | 2025-11-23 00:52:05.551144 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-11-23 00:52:05.551151 | orchestrator | Sunday 23 November 2025 00:42:59 +0000 (0:00:01.215) 0:01:13.520 ******* 2025-11-23 00:52:05.551157 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.551163 | 
orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.551169 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.551175 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.551181 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.551187 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.551193 | orchestrator | 2025-11-23 00:52:05.551199 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-11-23 00:52:05.551205 | orchestrator | Sunday 23 November 2025 00:43:01 +0000 (0:00:01.697) 0:01:15.217 ******* 2025-11-23 00:52:05.551211 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.551217 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.551223 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.551229 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.551235 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.551242 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.551248 | orchestrator | 2025-11-23 00:52:05.551254 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-11-23 00:52:05.551260 | orchestrator | Sunday 23 November 2025 00:43:03 +0000 (0:00:02.503) 0:01:17.720 ******* 2025-11-23 00:52:05.551266 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.551273 | orchestrator | 2025-11-23 00:52:05.551279 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-11-23 00:52:05.551285 | orchestrator | Sunday 23 November 2025 00:43:04 +0000 (0:00:01.064) 0:01:18.785 ******* 2025-11-23 00:52:05.551291 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.551297 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.551303 | orchestrator | 
skipping: [testbed-node-5] 2025-11-23 00:52:05.551309 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.551315 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.551321 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.551327 | orchestrator | 2025-11-23 00:52:05.551333 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-11-23 00:52:05.551340 | orchestrator | Sunday 23 November 2025 00:43:05 +0000 (0:00:00.611) 0:01:19.396 ******* 2025-11-23 00:52:05.551346 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.551352 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.551358 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.551364 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.551370 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.551393 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.551401 | orchestrator | 2025-11-23 00:52:05.551407 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-11-23 00:52:05.551419 | orchestrator | Sunday 23 November 2025 00:43:06 +0000 (0:00:00.796) 0:01:20.193 ******* 2025-11-23 00:52:05.551425 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-23 00:52:05.551431 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-23 00:52:05.551437 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-23 00:52:05.551443 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-23 00:52:05.551449 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-23 00:52:05.551455 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-23 00:52:05.551461 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-23 00:52:05.551467 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-23 00:52:05.551473 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-23 00:52:05.551480 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-23 00:52:05.551505 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-23 00:52:05.551512 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-23 00:52:05.551519 | orchestrator | 2025-11-23 00:52:05.551525 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-11-23 00:52:05.551531 | orchestrator | Sunday 23 November 2025 00:43:07 +0000 (0:00:01.215) 0:01:21.408 ******* 2025-11-23 00:52:05.551537 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.551543 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.551549 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.551555 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.551561 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.551567 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.551573 | orchestrator | 2025-11-23 00:52:05.551579 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-11-23 00:52:05.551585 | orchestrator | Sunday 23 November 2025 00:43:08 +0000 (0:00:01.065) 0:01:22.474 ******* 2025-11-23 00:52:05.551591 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.551597 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.551603 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.551609 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.551615 | 
orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.551621 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.551628 | orchestrator | 2025-11-23 00:52:05.551634 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-11-23 00:52:05.551640 | orchestrator | Sunday 23 November 2025 00:43:09 +0000 (0:00:00.524) 0:01:22.998 ******* 2025-11-23 00:52:05.551646 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.551652 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.551658 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.551664 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.551670 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.551676 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.551682 | orchestrator | 2025-11-23 00:52:05.551688 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-11-23 00:52:05.551694 | orchestrator | Sunday 23 November 2025 00:43:09 +0000 (0:00:00.719) 0:01:23.718 ******* 2025-11-23 00:52:05.551700 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.551706 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.551712 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.551718 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.551724 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.551731 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.551741 | orchestrator | 2025-11-23 00:52:05.551747 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-11-23 00:52:05.551753 | orchestrator | Sunday 23 November 2025 00:43:10 +0000 (0:00:00.533) 0:01:24.251 ******* 2025-11-23 00:52:05.551760 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.551766 | orchestrator | 2025-11-23 00:52:05.551772 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-11-23 00:52:05.551779 | orchestrator | Sunday 23 November 2025 00:43:11 +0000 (0:00:01.214) 0:01:25.466 ******* 2025-11-23 00:52:05.551785 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.551791 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.551797 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.551803 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.551809 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.551815 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.551821 | orchestrator | 2025-11-23 00:52:05.551827 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-11-23 00:52:05.551833 | orchestrator | Sunday 23 November 2025 00:44:17 +0000 (0:01:05.649) 0:02:31.116 ******* 2025-11-23 00:52:05.551839 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-23 00:52:05.551845 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-23 00:52:05.551851 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-23 00:52:05.551857 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.551864 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-23 00:52:05.551873 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-23 00:52:05.551879 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-23 00:52:05.551886 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.551892 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-23 
00:52:05.551898 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-23 00:52:05.551904 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-23 00:52:05.551910 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.551916 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-23 00:52:05.551922 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-23 00:52:05.551928 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-23 00:52:05.551934 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.551940 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-23 00:52:05.551946 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-23 00:52:05.551952 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-23 00:52:05.551958 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.551982 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-23 00:52:05.551989 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-23 00:52:05.551995 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-23 00:52:05.552001 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552008 | orchestrator | 2025-11-23 00:52:05.552014 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-11-23 00:52:05.552020 | orchestrator | Sunday 23 November 2025 00:44:17 +0000 (0:00:00.604) 0:02:31.720 ******* 2025-11-23 00:52:05.552026 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552032 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552043 | 
orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552049 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552055 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552061 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552067 | orchestrator | 2025-11-23 00:52:05.552073 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-11-23 00:52:05.552080 | orchestrator | Sunday 23 November 2025 00:44:18 +0000 (0:00:00.670) 0:02:32.390 ******* 2025-11-23 00:52:05.552086 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552092 | orchestrator | 2025-11-23 00:52:05.552098 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-11-23 00:52:05.552104 | orchestrator | Sunday 23 November 2025 00:44:18 +0000 (0:00:00.147) 0:02:32.538 ******* 2025-11-23 00:52:05.552110 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552116 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552122 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552128 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552134 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552140 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552146 | orchestrator | 2025-11-23 00:52:05.552152 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-11-23 00:52:05.552158 | orchestrator | Sunday 23 November 2025 00:44:19 +0000 (0:00:00.513) 0:02:33.052 ******* 2025-11-23 00:52:05.552165 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552171 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552177 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552183 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552189 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552195 | 
orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552201 | orchestrator | 2025-11-23 00:52:05.552207 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-11-23 00:52:05.552213 | orchestrator | Sunday 23 November 2025 00:44:19 +0000 (0:00:00.646) 0:02:33.698 ******* 2025-11-23 00:52:05.552219 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552225 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552231 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552237 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552243 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552250 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552256 | orchestrator | 2025-11-23 00:52:05.552262 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-11-23 00:52:05.552268 | orchestrator | Sunday 23 November 2025 00:44:20 +0000 (0:00:00.539) 0:02:34.237 ******* 2025-11-23 00:52:05.552274 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.552280 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.552286 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.552292 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.552298 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.552304 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.552310 | orchestrator | 2025-11-23 00:52:05.552317 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-11-23 00:52:05.552323 | orchestrator | Sunday 23 November 2025 00:44:22 +0000 (0:00:02.317) 0:02:36.554 ******* 2025-11-23 00:52:05.552329 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.552335 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.552341 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.552347 | orchestrator | ok: [testbed-node-0] 2025-11-23 
00:52:05.552353 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.552359 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.552365 | orchestrator | 2025-11-23 00:52:05.552371 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-11-23 00:52:05.552394 | orchestrator | Sunday 23 November 2025 00:44:23 +0000 (0:00:00.569) 0:02:37.124 ******* 2025-11-23 00:52:05.552401 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.552415 | orchestrator | 2025-11-23 00:52:05.552424 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-11-23 00:52:05.552431 | orchestrator | Sunday 23 November 2025 00:44:24 +0000 (0:00:01.233) 0:02:38.358 ******* 2025-11-23 00:52:05.552437 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552443 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552449 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552458 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552467 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552477 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552487 | orchestrator | 2025-11-23 00:52:05.552497 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-11-23 00:52:05.552505 | orchestrator | Sunday 23 November 2025 00:44:25 +0000 (0:00:00.938) 0:02:39.297 ******* 2025-11-23 00:52:05.552512 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552518 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552524 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552530 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552536 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552542 | 
orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552548 | orchestrator | 2025-11-23 00:52:05.552554 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-11-23 00:52:05.552561 | orchestrator | Sunday 23 November 2025 00:44:26 +0000 (0:00:00.622) 0:02:39.920 ******* 2025-11-23 00:52:05.552567 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552573 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552601 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552608 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552614 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552621 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552627 | orchestrator | 2025-11-23 00:52:05.552633 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-11-23 00:52:05.552640 | orchestrator | Sunday 23 November 2025 00:44:26 +0000 (0:00:00.802) 0:02:40.722 ******* 2025-11-23 00:52:05.552646 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552652 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552658 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552664 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552671 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552677 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552683 | orchestrator | 2025-11-23 00:52:05.552689 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-11-23 00:52:05.552695 | orchestrator | Sunday 23 November 2025 00:44:27 +0000 (0:00:00.487) 0:02:41.210 ******* 2025-11-23 00:52:05.552702 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552708 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552714 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552720 | 
orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552726 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552733 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552739 | orchestrator | 2025-11-23 00:52:05.552745 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-11-23 00:52:05.552752 | orchestrator | Sunday 23 November 2025 00:44:27 +0000 (0:00:00.663) 0:02:41.874 ******* 2025-11-23 00:52:05.552758 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552764 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552770 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552776 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552782 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552789 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552803 | orchestrator | 2025-11-23 00:52:05.552809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-11-23 00:52:05.552815 | orchestrator | Sunday 23 November 2025 00:44:28 +0000 (0:00:00.599) 0:02:42.473 ******* 2025-11-23 00:52:05.552822 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552828 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552834 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552840 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552846 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552853 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552859 | orchestrator | 2025-11-23 00:52:05.552865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-11-23 00:52:05.552871 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.702) 0:02:43.176 ******* 2025-11-23 00:52:05.552878 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.552884 | 
orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.552890 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.552896 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.552903 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.552913 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.552922 | orchestrator | 2025-11-23 00:52:05.552928 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-11-23 00:52:05.552934 | orchestrator | Sunday 23 November 2025 00:44:29 +0000 (0:00:00.508) 0:02:43.684 ******* 2025-11-23 00:52:05.552941 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.552947 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.552953 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.552959 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.552965 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.552971 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.552977 | orchestrator | 2025-11-23 00:52:05.552983 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-11-23 00:52:05.552989 | orchestrator | Sunday 23 November 2025 00:44:30 +0000 (0:00:01.048) 0:02:44.733 ******* 2025-11-23 00:52:05.552996 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.553002 | orchestrator | 2025-11-23 00:52:05.553008 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-11-23 00:52:05.553014 | orchestrator | Sunday 23 November 2025 00:44:31 +0000 (0:00:00.939) 0:02:45.673 ******* 2025-11-23 00:52:05.553020 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-11-23 00:52:05.553026 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-11-23 00:52:05.553036 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-11-23 00:52:05.553043 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-11-23 00:52:05.553049 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-11-23 00:52:05.553055 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-11-23 00:52:05.553061 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-11-23 00:52:05.553067 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-11-23 00:52:05.553073 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-11-23 00:52:05.553079 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-11-23 00:52:05.553085 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-11-23 00:52:05.553091 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-11-23 00:52:05.553097 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-11-23 00:52:05.553103 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-11-23 00:52:05.553109 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-11-23 00:52:05.553115 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-11-23 00:52:05.553121 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-11-23 00:52:05.553131 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-11-23 00:52:05.553156 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-11-23 00:52:05.553163 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-11-23 00:52:05.553170 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-11-23 00:52:05.553176 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-11-23 00:52:05.553182 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-11-23 
00:52:05.553188 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-11-23 00:52:05.553194 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-11-23 00:52:05.553200 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-11-23 00:52:05.553206 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-11-23 00:52:05.553212 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-11-23 00:52:05.553219 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-11-23 00:52:05.553225 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-11-23 00:52:05.553231 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-11-23 00:52:05.553237 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-11-23 00:52:05.553243 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-11-23 00:52:05.553249 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-11-23 00:52:05.553255 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-11-23 00:52:05.553261 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-11-23 00:52:05.553268 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-11-23 00:52:05.553274 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-11-23 00:52:05.553280 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-11-23 00:52:05.553286 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-11-23 00:52:05.553292 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-11-23 00:52:05.553299 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-11-23 00:52:05.553305 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-11-23 00:52:05.553311 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-11-23 00:52:05.553317 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-11-23 00:52:05.553323 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-23 00:52:05.553329 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-23 00:52:05.553335 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-23 00:52:05.553341 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-11-23 00:52:05.553348 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-11-23 00:52:05.553354 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-11-23 00:52:05.553360 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-23 00:52:05.553366 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-23 00:52:05.553372 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-23 00:52:05.553396 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-23 00:52:05.553408 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-23 00:52:05.553418 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-23 00:52:05.553428 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-23 00:52:05.553437 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-23 00:52:05.553448 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-23 00:52:05.553454 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-23 00:52:05.553460 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-23 
00:52:05.553470 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-23 00:52:05.553476 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-23 00:52:05.553482 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-23 00:52:05.553488 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-23 00:52:05.553495 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-23 00:52:05.553501 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-23 00:52:05.553507 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-23 00:52:05.553513 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-23 00:52:05.553519 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-23 00:52:05.553525 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-23 00:52:05.553533 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-23 00:52:05.553544 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-23 00:52:05.553553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-23 00:52:05.553564 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-23 00:52:05.553601 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-23 00:52:05.553609 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-23 00:52:05.553615 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-23 00:52:05.553622 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-23 00:52:05.553628 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-23 00:52:05.553634 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-11-23 00:52:05.553643 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-11-23 00:52:05.553653 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-23 00:52:05.553663 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-11-23 00:52:05.553673 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-23 00:52:05.553683 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-23 00:52:05.553693 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-11-23 00:52:05.553704 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-11-23 00:52:05.553715 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-11-23 00:52:05.553725 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-11-23 00:52:05.553736 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-11-23 00:52:05.553747 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-11-23 00:52:05.553758 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-11-23 00:52:05.553769 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-11-23 00:52:05.553779 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-11-23 00:52:05.553790 | orchestrator | 2025-11-23 00:52:05.553801 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-11-23 00:52:05.553811 | orchestrator | Sunday 23 November 2025 00:44:38 +0000 (0:00:06.761) 0:02:52.434 ******* 2025-11-23 00:52:05.553822 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.553836 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.553842 | 
orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.553849 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.553855 | orchestrator | 2025-11-23 00:52:05.553861 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-11-23 00:52:05.553867 | orchestrator | Sunday 23 November 2025 00:44:39 +0000 (0:00:00.655) 0:02:53.090 ******* 2025-11-23 00:52:05.553873 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.553880 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.553887 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.553893 | orchestrator | 2025-11-23 00:52:05.553899 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-11-23 00:52:05.553905 | orchestrator | Sunday 23 November 2025 00:44:40 +0000 (0:00:00.854) 0:02:53.945 ******* 2025-11-23 00:52:05.553912 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.553918 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.553924 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.553930 | orchestrator | 2025-11-23 00:52:05.553937 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 
2025-11-23 00:52:05.553947 | orchestrator | Sunday 23 November 2025 00:44:41 +0000 (0:00:01.440) 0:02:55.385 ******* 2025-11-23 00:52:05.553954 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.553960 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.553966 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.553972 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.553978 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.553984 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.553991 | orchestrator | 2025-11-23 00:52:05.553997 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-11-23 00:52:05.554003 | orchestrator | Sunday 23 November 2025 00:44:42 +0000 (0:00:00.708) 0:02:56.094 ******* 2025-11-23 00:52:05.554009 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.554041 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.554048 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.554054 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554060 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554066 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554072 | orchestrator | 2025-11-23 00:52:05.554078 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-11-23 00:52:05.554084 | orchestrator | Sunday 23 November 2025 00:44:43 +0000 (0:00:00.971) 0:02:57.066 ******* 2025-11-23 00:52:05.554090 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.554096 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.554106 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.554117 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554128 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554138 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554148 | orchestrator | 2025-11-23 
00:52:05.554193 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-11-23 00:52:05.554206 | orchestrator | Sunday 23 November 2025 00:44:43 +0000 (0:00:00.600) 0:02:57.668 ******* 2025-11-23 00:52:05.554216 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.554225 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.554237 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.554244 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554250 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554256 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554262 | orchestrator | 2025-11-23 00:52:05.554268 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-11-23 00:52:05.554274 | orchestrator | Sunday 23 November 2025 00:44:44 +0000 (0:00:01.000) 0:02:58.668 ******* 2025-11-23 00:52:05.554280 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.554286 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.554292 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.554298 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554304 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554310 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554316 | orchestrator | 2025-11-23 00:52:05.554323 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-11-23 00:52:05.554329 | orchestrator | Sunday 23 November 2025 00:44:45 +0000 (0:00:00.723) 0:02:59.392 ******* 2025-11-23 00:52:05.554335 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.554341 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.554347 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.554353 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554359 | 
orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554365 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554371 | orchestrator | 2025-11-23 00:52:05.554418 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-11-23 00:52:05.554426 | orchestrator | Sunday 23 November 2025 00:44:46 +0000 (0:00:00.641) 0:03:00.034 ******* 2025-11-23 00:52:05.554432 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.554438 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.554444 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.554450 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554456 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554462 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554469 | orchestrator | 2025-11-23 00:52:05.554475 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-11-23 00:52:05.554481 | orchestrator | Sunday 23 November 2025 00:44:46 +0000 (0:00:00.791) 0:03:00.825 ******* 2025-11-23 00:52:05.554487 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.554493 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.554499 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.554505 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554511 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554517 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554523 | orchestrator | 2025-11-23 00:52:05.554529 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-11-23 00:52:05.554535 | orchestrator | Sunday 23 November 2025 00:44:47 +0000 (0:00:00.933) 0:03:01.759 ******* 2025-11-23 00:52:05.554541 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554547 | 
orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554554 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554560 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.554566 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.554572 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.554578 | orchestrator | 2025-11-23 00:52:05.554584 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-11-23 00:52:05.554590 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:03.170) 0:03:04.930 ******* 2025-11-23 00:52:05.554596 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.554602 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.554608 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.554615 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554627 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554633 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554639 | orchestrator | 2025-11-23 00:52:05.554646 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-11-23 00:52:05.554652 | orchestrator | Sunday 23 November 2025 00:44:51 +0000 (0:00:00.832) 0:03:05.762 ******* 2025-11-23 00:52:05.554658 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.554664 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.554670 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.554676 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.554686 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.554693 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.554699 | orchestrator | 2025-11-23 00:52:05.554705 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-11-23 00:52:05.554711 | orchestrator | Sunday 23 November 2025 00:44:52 +0000 (0:00:01.083) 0:03:06.846 ******* 2025-11-23 
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-4]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Sunday 23 November 2025 00:44:53 +0000 (0:00:00.993) 0:03:07.840 *******
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Sunday 23 November 2025 00:44:54 +0000 (0:00:01.048) 0:03:08.889 *******
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set rgw configs to file] ***********************************
Sunday 23 November 2025 00:44:56 +0000 (0:00:01.034) 0:03:09.923 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Sunday 23 November 2025 00:44:56 +0000 (0:00:00.620) 0:03:10.543 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Sunday 23 November 2025 00:44:57 +0000 (0:00:00.601) 0:03:11.145 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Sunday 23 November 2025 00:44:57 +0000 (0:00:00.478) 0:03:11.624 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Sunday 23 November 2025 00:44:58 +0000 (0:00:00.595) 0:03:12.219 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Sunday 23 November 2025 00:44:59 +0000 (0:00:00.708) 0:03:12.928 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Sunday 23 November 2025 00:44:59 +0000 (0:00:00.822) 0:03:13.750 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Sunday 23 November 2025 00:45:00 +0000 (0:00:00.399) 0:03:14.149 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Sunday 23 November 2025 00:45:00 +0000 (0:00:00.407) 0:03:14.557 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Sunday 23 November 2025 00:45:01 +0000 (0:00:00.372) 0:03:14.930 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Sunday 23 November 2025 00:45:01 +0000 (0:00:00.775) 0:03:15.705 *******
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=0)
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]

TASK [ceph-config : Generate Ceph file] ****************************************
Sunday 23 November 2025 00:45:03 +0000 (0:00:02.153) 0:03:17.859 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Sunday 23 November 2025 00:45:06 +0000 (0:00:02.559) 0:03:20.418 *******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Sunday 23 November 2025 00:45:07 +0000 (0:00:01.070) 0:03:21.488 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Sunday 23 November 2025 00:45:08 +0000 (0:00:00.747) 0:03:22.236 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Sunday 23 November 2025 00:45:08 +0000 (0:00:00.260) 0:03:22.496 *******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Sunday 23 November 2025 00:45:09 +0000 (0:00:01.066) 0:03:23.563 *******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Sunday 23 November 2025 00:45:10 +0000 (0:00:00.879) 0:03:24.442 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Sunday 23 November 2025 00:45:10 +0000 (0:00:00.319) 0:03:24.762 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Sunday 23 November 2025 00:45:11 +0000 (0:00:00.831) 0:03:25.593 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Sunday 23 November 2025 00:45:12 +0000 (0:00:00.378) 0:03:25.971 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Sunday 23 November 2025 00:45:12 +0000 (0:00:00.243) 0:03:26.215 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Sunday 23 November 2025 00:45:12 +0000 (0:00:00.177) 0:03:26.392 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Sunday 23 November 2025 00:45:12 +0000 (0:00:00.295) 0:03:26.687 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Sunday 23 November 2025 00:45:12 +0000 (0:00:00.197) 0:03:26.885 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Sunday 23 November 2025 00:45:13 +0000 (0:00:00.190) 0:03:27.076 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Sunday 23 November 2025 00:45:13 +0000 (0:00:00.112) 0:03:27.188 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Sunday 23 November 2025 00:45:13 +0000 (0:00:00.507) 0:03:27.696 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Sunday 23 November 2025 00:45:13 +0000 (0:00:00.186) 0:03:27.882 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Sunday 23 November 2025 00:45:14 +0000 (0:00:00.329) 0:03:28.212 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Sunday 23 November 2025 00:45:14 +0000 (0:00:00.291) 0:03:28.503 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Sunday 23 November 2025 00:45:14 +0000 (0:00:00.203) 0:03:28.707 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Sunday 23 November 2025 00:45:14 +0000 (0:00:00.183) 0:03:28.890 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Sunday 23 November 2025 00:45:15 +0000 (0:00:00.870) 0:03:29.761 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Sunday 23 November 2025 00:45:16 +0000 (0:00:00.305) 0:03:30.067 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Sunday 23 November 2025 00:45:17 +0000 (0:00:01.057) 0:03:31.124 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Sunday 23 November 2025 00:45:17 +0000 (0:00:00.711) 0:03:31.836 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Sunday 23 November 2025 00:45:18 +0000 (0:00:00.554) 0:03:32.391 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Sunday 23 November 2025 00:45:19 +0000 (0:00:00.726) 0:03:33.117 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Sunday 23 November 2025 00:45:19 +0000 (0:00:00.427) 0:03:33.545 *******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Sunday 23 November 2025 00:45:20 +0000 (0:00:01.040) 0:03:34.585 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Sunday 23 November 2025 00:45:21 +0000 (0:00:00.565) 0:03:35.151 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Sunday 23 November 2025 00:45:21 +0000 (0:00:00.312) 0:03:35.464 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Sunday 23 November 2025 00:45:22 +0000 (0:00:00.674) 0:03:36.139 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Sunday 23 November 2025 00:45:23 +0000 (0:00:00.767) 0:03:36.907 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Sunday 23 November 2025 00:45:23 +0000 (0:00:00.465) 0:03:37.372 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Sunday 23 November 2025 00:45:24 +0000 (0:00:01.232) 0:03:38.605 *******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Sunday 23 November 2025 00:45:25 +0000 (0:00:00.625) 0:03:39.230 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 23 November 2025 00:45:25 +0000 (0:00:00.603) 0:03:39.833 *******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 23 November 2025 00:45:26 +0000 (0:00:00.606) 0:03:40.439 *******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Sunday 23 November 2025 00:45:27 +0000 (0:00:00.466) 0:03:40.906 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 23 November 2025 00:45:27 +0000 (0:00:00.856) 0:03:41.762 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Sunday 23 November 2025 00:45:28 +0000 (0:00:00.305) 0:03:42.067 *******
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 23 November 2025 00:45:28 +0000 (0:00:00.306) 0:03:42.374 *******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-0]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 23 November 2025 00:45:28 +0000 (0:00:00.339) 0:03:42.713 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 23 November 2025 00:45:29 +0000 (0:00:00.980) 0:03:43.693 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 23 November 2025 00:45:30 +0000 (0:00:00.301) 0:03:43.995 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 23 November 2025 00:45:30 +0000 (0:00:00.263) 0:03:44.258 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 23 November 2025 00:45:31 +0000 (0:00:00.664) 0:03:44.923 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 23 November 2025 00:45:31 +0000 (0:00:00.846) 0:03:45.769 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 23 November 2025 00:45:32 +0000 (0:00:00.270) 0:03:46.040 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 23 November 2025 00:45:32 +0000 (0:00:00.309) 0:03:46.349 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 23 November 2025 00:45:32 +0000 (0:00:00.268) 0:03:46.618 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 23 November 2025 00:45:32 +0000 (0:00:00.254) 0:03:46.873 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 23 November 2025 00:45:33 +0000 (0:00:00.411) 0:03:47.284 *******
00:52:05.557517 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.557522 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.557528 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.557533 | orchestrator | 2025-11-23 00:52:05.557538 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-23 00:52:05.557544 | orchestrator | Sunday 23 November 2025 00:45:33 +0000 (0:00:00.266) 0:03:47.551 ******* 2025-11-23 00:52:05.557549 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.557555 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.557560 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.557566 | orchestrator | 2025-11-23 00:52:05.557571 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-23 00:52:05.557576 | orchestrator | Sunday 23 November 2025 00:45:33 +0000 (0:00:00.270) 0:03:47.821 ******* 2025-11-23 00:52:05.557582 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.557587 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.557592 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.557598 | orchestrator | 2025-11-23 00:52:05.557603 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-23 00:52:05.557609 | orchestrator | Sunday 23 November 2025 00:45:34 +0000 (0:00:00.275) 0:03:48.097 ******* 2025-11-23 00:52:05.557614 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.557619 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.557624 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.557630 | orchestrator | 2025-11-23 00:52:05.557638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-23 00:52:05.557644 | orchestrator | Sunday 23 November 2025 00:45:34 +0000 (0:00:00.453) 0:03:48.551 ******* 2025-11-23 00:52:05.557649 | orchestrator | ok: 
[testbed-node-0] 2025-11-23 00:52:05.557655 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.557660 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.557665 | orchestrator | 2025-11-23 00:52:05.557671 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-11-23 00:52:05.557676 | orchestrator | Sunday 23 November 2025 00:45:35 +0000 (0:00:00.544) 0:03:49.095 ******* 2025-11-23 00:52:05.557681 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.557687 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.557692 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.557697 | orchestrator | 2025-11-23 00:52:05.557703 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-11-23 00:52:05.557708 | orchestrator | Sunday 23 November 2025 00:45:35 +0000 (0:00:00.293) 0:03:49.388 ******* 2025-11-23 00:52:05.557714 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.557719 | orchestrator | 2025-11-23 00:52:05.557724 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-11-23 00:52:05.557730 | orchestrator | Sunday 23 November 2025 00:45:36 +0000 (0:00:00.658) 0:03:50.047 ******* 2025-11-23 00:52:05.557735 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.557741 | orchestrator | 2025-11-23 00:52:05.557764 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-11-23 00:52:05.557770 | orchestrator | Sunday 23 November 2025 00:45:36 +0000 (0:00:00.129) 0:03:50.176 ******* 2025-11-23 00:52:05.557775 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-11-23 00:52:05.557786 | orchestrator | 2025-11-23 00:52:05.557792 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-11-23 00:52:05.557797 | 
orchestrator | Sunday 23 November 2025 00:45:37 +0000 (0:00:00.913) 0:03:51.090 ******* 2025-11-23 00:52:05.557803 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.557808 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.557814 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.557819 | orchestrator | 2025-11-23 00:52:05.557824 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-11-23 00:52:05.557830 | orchestrator | Sunday 23 November 2025 00:45:37 +0000 (0:00:00.317) 0:03:51.407 ******* 2025-11-23 00:52:05.557835 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.557840 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.557845 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.557851 | orchestrator | 2025-11-23 00:52:05.557856 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-11-23 00:52:05.557861 | orchestrator | Sunday 23 November 2025 00:45:37 +0000 (0:00:00.307) 0:03:51.714 ******* 2025-11-23 00:52:05.557867 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.557872 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.557877 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.557883 | orchestrator | 2025-11-23 00:52:05.557888 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-11-23 00:52:05.557893 | orchestrator | Sunday 23 November 2025 00:45:39 +0000 (0:00:01.273) 0:03:52.988 ******* 2025-11-23 00:52:05.557899 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.557904 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.557909 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.557914 | orchestrator | 2025-11-23 00:52:05.557920 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-11-23 00:52:05.557925 | orchestrator | Sunday 23 November 2025 
00:45:39 +0000 (0:00:00.819) 0:03:53.808 ******* 2025-11-23 00:52:05.557930 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.557936 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.557941 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.557946 | orchestrator | 2025-11-23 00:52:05.557951 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-11-23 00:52:05.557957 | orchestrator | Sunday 23 November 2025 00:45:40 +0000 (0:00:00.639) 0:03:54.447 ******* 2025-11-23 00:52:05.557962 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.557967 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.557973 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.557978 | orchestrator | 2025-11-23 00:52:05.557983 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-11-23 00:52:05.557989 | orchestrator | Sunday 23 November 2025 00:45:41 +0000 (0:00:00.658) 0:03:55.106 ******* 2025-11-23 00:52:05.557994 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.557999 | orchestrator | 2025-11-23 00:52:05.558004 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-11-23 00:52:05.558010 | orchestrator | Sunday 23 November 2025 00:45:42 +0000 (0:00:01.632) 0:03:56.739 ******* 2025-11-23 00:52:05.558035 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.558041 | orchestrator | 2025-11-23 00:52:05.558046 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-11-23 00:52:05.558052 | orchestrator | Sunday 23 November 2025 00:45:43 +0000 (0:00:01.062) 0:03:57.801 ******* 2025-11-23 00:52:05.558057 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.558062 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-11-23 00:52:05.558068 | orchestrator | ok: [testbed-node-2 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.558073 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-23 00:52:05.558078 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-11-23 00:52:05.558084 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-23 00:52:05.558096 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-23 00:52:05.558101 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-11-23 00:52:05.558107 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-23 00:52:05.558112 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-11-23 00:52:05.558121 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-11-23 00:52:05.558126 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-11-23 00:52:05.558132 | orchestrator | 2025-11-23 00:52:05.558137 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-11-23 00:52:05.558142 | orchestrator | Sunday 23 November 2025 00:45:47 +0000 (0:00:03.116) 0:04:00.917 ******* 2025-11-23 00:52:05.558147 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.558153 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.558158 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.558163 | orchestrator | 2025-11-23 00:52:05.558169 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-11-23 00:52:05.558174 | orchestrator | Sunday 23 November 2025 00:45:48 +0000 (0:00:01.142) 0:04:02.060 ******* 2025-11-23 00:52:05.558179 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.558185 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.558190 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.558195 | orchestrator | 2025-11-23 00:52:05.558201 | orchestrator | TASK [ceph-mon : 
Set_fact monmaptool container command] ************************ 2025-11-23 00:52:05.558206 | orchestrator | Sunday 23 November 2025 00:45:48 +0000 (0:00:00.275) 0:04:02.335 ******* 2025-11-23 00:52:05.558211 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.558216 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.558222 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.558227 | orchestrator | 2025-11-23 00:52:05.558232 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-11-23 00:52:05.558238 | orchestrator | Sunday 23 November 2025 00:45:48 +0000 (0:00:00.476) 0:04:02.812 ******* 2025-11-23 00:52:05.558260 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.558267 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.558272 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.558278 | orchestrator | 2025-11-23 00:52:05.558283 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-11-23 00:52:05.558288 | orchestrator | Sunday 23 November 2025 00:45:50 +0000 (0:00:01.454) 0:04:04.266 ******* 2025-11-23 00:52:05.558293 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.558299 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.558304 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.558309 | orchestrator | 2025-11-23 00:52:05.558314 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-11-23 00:52:05.558320 | orchestrator | Sunday 23 November 2025 00:45:51 +0000 (0:00:01.233) 0:04:05.500 ******* 2025-11-23 00:52:05.558325 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.558330 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.558335 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.558341 | orchestrator | 2025-11-23 00:52:05.558346 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2025-11-23 00:52:05.558351 | orchestrator | Sunday 23 November 2025 00:45:51 +0000 (0:00:00.309) 0:04:05.810 ******* 2025-11-23 00:52:05.558357 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.558362 | orchestrator | 2025-11-23 00:52:05.558367 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-11-23 00:52:05.558373 | orchestrator | Sunday 23 November 2025 00:45:52 +0000 (0:00:00.702) 0:04:06.512 ******* 2025-11-23 00:52:05.558473 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.558496 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.558502 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.558507 | orchestrator | 2025-11-23 00:52:05.558521 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-11-23 00:52:05.558526 | orchestrator | Sunday 23 November 2025 00:45:52 +0000 (0:00:00.362) 0:04:06.875 ******* 2025-11-23 00:52:05.558532 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.558537 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.558542 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.558548 | orchestrator | 2025-11-23 00:52:05.558553 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-11-23 00:52:05.558558 | orchestrator | Sunday 23 November 2025 00:45:53 +0000 (0:00:00.423) 0:04:07.298 ******* 2025-11-23 00:52:05.558564 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-11-23 00:52:05.558569 | orchestrator | 2025-11-23 00:52:05.558575 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-11-23 00:52:05.558580 | orchestrator | Sunday 23 November 2025 00:45:54 +0000 
(0:00:00.756) 0:04:08.055 ******* 2025-11-23 00:52:05.558586 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.558590 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.558595 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.558600 | orchestrator | 2025-11-23 00:52:05.558605 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-11-23 00:52:05.558610 | orchestrator | Sunday 23 November 2025 00:45:56 +0000 (0:00:01.895) 0:04:09.951 ******* 2025-11-23 00:52:05.558614 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.558619 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.558624 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.558629 | orchestrator | 2025-11-23 00:52:05.558633 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-11-23 00:52:05.558638 | orchestrator | Sunday 23 November 2025 00:45:57 +0000 (0:00:01.289) 0:04:11.241 ******* 2025-11-23 00:52:05.558643 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.558648 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.558652 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.558657 | orchestrator | 2025-11-23 00:52:05.558662 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-11-23 00:52:05.558666 | orchestrator | Sunday 23 November 2025 00:45:59 +0000 (0:00:01.843) 0:04:13.084 ******* 2025-11-23 00:52:05.558671 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:52:05.558676 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:52:05.558681 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:52:05.558685 | orchestrator | 2025-11-23 00:52:05.558690 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-11-23 00:52:05.558695 | orchestrator | Sunday 23 November 2025 00:46:01 +0000 (0:00:02.301) 
0:04:15.386 ******* 2025-11-23 00:52:05.558703 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.558708 | orchestrator | 2025-11-23 00:52:05.558713 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-11-23 00:52:05.558718 | orchestrator | Sunday 23 November 2025 00:46:02 +0000 (0:00:00.566) 0:04:15.953 ******* 2025-11-23 00:52:05.558723 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.558728 | orchestrator | 2025-11-23 00:52:05.558732 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-11-23 00:52:05.558737 | orchestrator | Sunday 23 November 2025 00:46:03 +0000 (0:00:01.427) 0:04:17.380 ******* 2025-11-23 00:52:05.558742 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.558746 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.558751 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.558756 | orchestrator | 2025-11-23 00:52:05.558761 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-11-23 00:52:05.558766 | orchestrator | Sunday 23 November 2025 00:46:13 +0000 (0:00:09.649) 0:04:27.030 ******* 2025-11-23 00:52:05.558770 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.558775 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.558786 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.558791 | orchestrator | 2025-11-23 00:52:05.558796 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-11-23 00:52:05.558801 | orchestrator | Sunday 23 November 2025 00:46:13 +0000 (0:00:00.424) 0:04:27.454 ******* 2025-11-23 00:52:05.558839 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': 
-1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f403ec7efadb50ba2371302c4a8526265ad84811'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-11-23 00:52:05.558846 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f403ec7efadb50ba2371302c4a8526265ad84811'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-11-23 00:52:05.558852 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f403ec7efadb50ba2371302c4a8526265ad84811'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-11-23 00:52:05.558858 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f403ec7efadb50ba2371302c4a8526265ad84811'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-11-23 00:52:05.558864 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f403ec7efadb50ba2371302c4a8526265ad84811'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-11-23 00:52:05.558870 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': 
'192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f403ec7efadb50ba2371302c4a8526265ad84811'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f403ec7efadb50ba2371302c4a8526265ad84811'}])  2025-11-23 00:52:05.558876 | orchestrator | 2025-11-23 00:52:05.558881 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-23 00:52:05.558885 | orchestrator | Sunday 23 November 2025 00:46:28 +0000 (0:00:15.097) 0:04:42.552 ******* 2025-11-23 00:52:05.558890 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.558895 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.558900 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.558904 | orchestrator | 2025-11-23 00:52:05.558909 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-11-23 00:52:05.558914 | orchestrator | Sunday 23 November 2025 00:46:28 +0000 (0:00:00.322) 0:04:42.874 ******* 2025-11-23 00:52:05.558919 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.558923 | orchestrator | 2025-11-23 00:52:05.558928 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-11-23 00:52:05.558933 | orchestrator | Sunday 23 November 2025 00:46:29 +0000 (0:00:00.660) 0:04:43.535 ******* 2025-11-23 00:52:05.558937 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.558945 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.558954 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.558959 | orchestrator | 2025-11-23 00:52:05.558964 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-11-23 00:52:05.558968 | orchestrator | Sunday 23 
November 2025 00:46:29 +0000 (0:00:00.277) 0:04:43.812 ******* 2025-11-23 00:52:05.558973 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.558978 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.558983 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.558988 | orchestrator | 2025-11-23 00:52:05.558992 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-11-23 00:52:05.558997 | orchestrator | Sunday 23 November 2025 00:46:30 +0000 (0:00:00.283) 0:04:44.096 ******* 2025-11-23 00:52:05.559002 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-23 00:52:05.559007 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-23 00:52:05.559011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-23 00:52:05.559016 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.559021 | orchestrator | 2025-11-23 00:52:05.559026 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-11-23 00:52:05.559031 | orchestrator | Sunday 23 November 2025 00:46:30 +0000 (0:00:00.689) 0:04:44.785 ******* 2025-11-23 00:52:05.559035 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.559040 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.559045 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.559050 | orchestrator | 2025-11-23 00:52:05.559054 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-11-23 00:52:05.559059 | orchestrator | 2025-11-23 00:52:05.559078 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-23 00:52:05.559084 | orchestrator | Sunday 23 November 2025 00:46:31 +0000 (0:00:00.632) 0:04:45.418 ******* 2025-11-23 00:52:05.559089 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-11-23 00:52:05.559094 | orchestrator | 2025-11-23 00:52:05.559099 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-23 00:52:05.559104 | orchestrator | Sunday 23 November 2025 00:46:31 +0000 (0:00:00.442) 0:04:45.860 ******* 2025-11-23 00:52:05.559108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.559113 | orchestrator | 2025-11-23 00:52:05.559118 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-23 00:52:05.559123 | orchestrator | Sunday 23 November 2025 00:46:32 +0000 (0:00:00.605) 0:04:46.466 ******* 2025-11-23 00:52:05.559128 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.559133 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.559138 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.559142 | orchestrator | 2025-11-23 00:52:05.559147 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-23 00:52:05.559152 | orchestrator | Sunday 23 November 2025 00:46:33 +0000 (0:00:00.684) 0:04:47.150 ******* 2025-11-23 00:52:05.559157 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.559162 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.559167 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.559172 | orchestrator | 2025-11-23 00:52:05.559177 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-23 00:52:05.559181 | orchestrator | Sunday 23 November 2025 00:46:33 +0000 (0:00:00.274) 0:04:47.425 ******* 2025-11-23 00:52:05.559186 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.559191 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.559196 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.559201 | 
TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 23 November 2025 00:46:34 +0000 (0:00:00.473) 0:04:47.898 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 23 November 2025 00:46:34 +0000 (0:00:00.294) 0:04:48.193 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 23 November 2025 00:46:34 +0000 (0:00:00.663) 0:04:48.857 *******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-0]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 23 November 2025 00:46:35 +0000 (0:00:00.320) 0:04:49.177 *******
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 23 November 2025 00:46:35 +0000 (0:00:00.320) 0:04:49.498 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 23 November 2025 00:46:36 +0000 (0:00:00.954) 0:04:50.453 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 23 November 2025 00:46:37 +0000 (0:00:00.720) 0:04:51.173 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 23 November 2025 00:46:37 +0000 (0:00:00.256) 0:04:51.430 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 23 November 2025 00:46:37 +0000 (0:00:00.306) 0:04:51.736 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 23 November 2025 00:46:38 +0000 (0:00:00.405) 0:04:52.141 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 23 November 2025 00:46:38 +0000 (0:00:00.264) 0:04:52.406 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 23 November 2025 00:46:38 +0000 (0:00:00.270) 0:04:52.677 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 23 November 2025 00:46:39 +0000 (0:00:00.264) 0:04:52.941 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 23 November 2025 00:46:39 +0000 (0:00:00.412) 0:04:53.354 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 23 November 2025 00:46:39 +0000 (0:00:00.282) 0:04:53.636 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 23 November 2025 00:46:40 +0000 (0:00:00.279) 0:04:53.916 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Sunday 23 November 2025 00:46:40 +0000 (0:00:00.594) 0:04:54.510 *******
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Sunday 23 November 2025 00:46:41 +0000 (0:00:00.559) 0:04:55.069 *******
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Sunday 23 November 2025 00:46:41 +0000 (0:00:00.456) 0:04:55.526 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Sunday 23 November 2025 00:46:42 +0000 (0:00:00.689) 0:04:56.216 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Sunday 23 November 2025 00:46:42 +0000 (0:00:00.456) 0:04:56.672 *******
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Sunday 23 November 2025 00:46:53 +0000 (0:00:10.429) 0:05:07.102 *******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Sunday 23 November 2025 00:46:53 +0000 (0:00:00.448) 0:05:07.550 *******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Sunday 23 November 2025 00:46:56 +0000 (0:00:02.388) 0:05:09.939 *******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-2] => (item=None)
changed: [testbed-node-1] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Sunday 23 November 2025 00:46:57 +0000 (0:00:01.194) 0:05:11.134 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Sunday 23 November 2025 00:46:58 +0000 (0:00:00.961) 0:05:12.095 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Sunday 23 November 2025 00:46:58 +0000 (0:00:00.284) 0:05:12.380 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Sunday 23 November 2025 00:46:58 +0000 (0:00:00.278) 0:05:12.659 *******
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Sunday 23 November 2025 00:46:59 +0000 (0:00:00.589) 0:05:13.249 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Sunday 23 November 2025 00:46:59 +0000 (0:00:00.293) 0:05:13.543 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Sunday 23 November 2025 00:46:59 +0000 (0:00:00.282) 0:05:13.825 *******
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Sunday 23 November 2025 00:47:00 +0000 (0:00:00.618) 0:05:14.443 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Sunday 23 November 2025 00:47:01 +0000 (0:00:01.165) 0:05:15.609 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Sunday 23 November 2025 00:47:02 +0000 (0:00:01.103) 0:05:16.713 *******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Sunday 23 November 2025 00:47:04 +0000 (0:00:01.834) 0:05:18.547 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Sunday 23 November 2025 00:47:06 +0000 (0:00:02.054) 0:05:20.602 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Sunday 23 November 2025 00:47:07 +0000 (0:00:00.566) 0:05:21.169 *******
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Sunday 23 November 2025 00:47:37 +0000 (0:00:30.206) 0:05:51.375 *******
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Sunday 23 November 2025 00:47:38 +0000 (0:00:01.262) 0:05:52.638 *******
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Sunday 23 November 2025 00:47:39 +0000 (0:00:00.300) 0:05:52.939 *******
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Sunday 23 November 2025 00:47:39 +0000 (0:00:00.118) 0:05:53.057 *******
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Sunday 23 November 2025 00:47:45 +0000 (0:00:06.488) 0:05:59.546 *******
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Sunday 23 November 2025 00:47:50 +0000 (0:00:05.043) 0:06:04.590 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Sunday 23 November 2025 00:47:51 +0000 (0:00:00.631) 0:06:05.221 *******
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Sunday 23 November 2025 00:47:51 +0000 (0:00:00.463) 0:06:05.685 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Sunday 23 November 2025 00:47:52 +0000 (0:00:00.452) 0:06:06.137 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Sunday 23 November 2025 00:47:53 +0000 (0:00:01.187) 0:06:07.324 *******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Sunday 23 November 2025 00:47:53 +0000 (0:00:00.531) 0:06:07.856 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 23 November 2025 00:47:54 +0000 (0:00:00.643) 0:06:08.499 *******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 23 November 2025 00:47:55 +0000 (0:00:00.448) 0:06:08.947 *******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Sunday 23 November 2025 00:47:55 +0000 (0:00:00.643) 0:06:09.591 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 23 November 2025 00:47:56 +0000 (0:00:00.323) 0:06:09.915 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Sunday 23 November 2025 00:47:56 +0000 (0:00:00.660) 0:06:10.576 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 23 November 2025 00:47:57 +0000 (0:00:00.617) 0:06:11.193 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 23 November 2025 00:47:58 +0000 (0:00:00.761) 0:06:11.955 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 23 November 2025 00:47:58 +0000 (0:00:00.293) 0:06:12.248 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 23 November 2025 00:47:58 +0000 (0:00:00.281) 0:06:12.530 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 23 November 2025 00:47:58 +0000 (0:00:00.258) 0:06:12.788 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 23 November 2025 00:47:59 +0000 (0:00:00.622) 0:06:13.410 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 23 November 2025 00:48:00 +0000 (0:00:00.844) 0:06:14.255 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 23 November 2025 00:48:00 +0000 (0:00:00.280) 0:06:14.536 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 23 November 2025 00:48:00 +0000 (0:00:00.289) 0:06:14.826 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 23 November 2025 00:48:01 +0000 (0:00:00.299) 0:06:15.126 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 23 November 2025 00:48:01 +0000 (0:00:00.480) 0:06:15.606 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 23 November 2025 00:48:01 +0000 (0:00:00.282) 0:06:15.889 *******
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 23 November 2025 00:48:02 +0000 (0:00:00.289) 0:06:16.179 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 23 November 2025 00:48:02 +0000 (0:00:00.259) 0:06:16.439 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 23 November 2025 00:48:02 +0000 (0:00:00.444) 0:06:16.884 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 23 November 2025 00:48:03 +0000 (0:00:00.300) 0:06:17.184 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Sunday 23 November 2025 00:48:03 +0000 (0:00:00.465) 0:06:17.649 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Sunday 23 November 2025 00:48:04 +0000 (0:00:00.421) 0:06:18.071 *******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Sunday 23 November 2025 00:48:04 +0000 (0:00:00.584) 0:06:18.656 *******
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Sunday 23 November 2025 00:48:05 +0000 (0:00:00.448) 0:06:19.104 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Sunday 23 November 2025 00:48:05 +0000 (0:00:00.448) 0:06:19.553 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Sunday 23 November 2025 00:48:05 +0000 (0:00:00.279) 0:06:19.833 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Sunday 23 November 2025 00:48:06 +0000 (0:00:00.548) 0:06:20.381 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Sunday 23 November 2025 00:48:06 +0000 (0:00:00.293) 0:06:20.674 *******
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
2025-11-23 00:52:05.561774 | orchestrator | Sunday 23 November 2025 00:48:09 +0000 (0:00:03.022) 0:06:23.697 ******* 2025-11-23 00:52:05.561779 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.561784 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.561788 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.561793 | orchestrator | 2025-11-23 00:52:05.561798 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-11-23 00:52:05.561803 | orchestrator | Sunday 23 November 2025 00:48:10 +0000 (0:00:00.265) 0:06:23.962 ******* 2025-11-23 00:52:05.561808 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.561813 | orchestrator | 2025-11-23 00:52:05.561817 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-11-23 00:52:05.561822 | orchestrator | Sunday 23 November 2025 00:48:10 +0000 (0:00:00.456) 0:06:24.419 ******* 2025-11-23 00:52:05.561827 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-23 00:52:05.561832 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-23 00:52:05.561837 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-23 00:52:05.561841 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-11-23 00:52:05.561846 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-11-23 00:52:05.561851 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-11-23 00:52:05.561856 | orchestrator | 2025-11-23 00:52:05.561861 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-11-23 00:52:05.561866 | orchestrator | Sunday 23 November 2025 00:48:11 +0000 (0:00:01.157) 0:06:25.577 ******* 2025-11-23 00:52:05.561870 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.561875 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-23 00:52:05.561880 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-23 00:52:05.561885 | orchestrator | 2025-11-23 00:52:05.561890 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-11-23 00:52:05.561895 | orchestrator | Sunday 23 November 2025 00:48:13 +0000 (0:00:02.081) 0:06:27.659 ******* 2025-11-23 00:52:05.561899 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-23 00:52:05.561904 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-23 00:52:05.561909 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.561914 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-23 00:52:05.561919 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-23 00:52:05.561924 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.561928 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-23 00:52:05.561933 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-23 00:52:05.561938 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.561943 | orchestrator | 2025-11-23 00:52:05.561947 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-11-23 00:52:05.561952 | orchestrator | Sunday 23 November 2025 00:48:14 +0000 (0:00:01.183) 0:06:28.842 ******* 2025-11-23 00:52:05.561960 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-23 00:52:05.561965 | orchestrator | 2025-11-23 00:52:05.561970 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-11-23 00:52:05.561974 | orchestrator | Sunday 23 November 2025 00:48:17 +0000 (0:00:02.103) 0:06:30.945 ******* 2025-11-23 00:52:05.561983 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.561988 | orchestrator | 2025-11-23 00:52:05.561992 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-11-23 00:52:05.561997 | orchestrator | Sunday 23 November 2025 00:48:17 +0000 (0:00:00.497) 0:06:31.443 ******* 2025-11-23 00:52:05.562002 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e77b7216-a915-581b-8f3c-a7fc3e50862f', 'data_vg': 'ceph-e77b7216-a915-581b-8f3c-a7fc3e50862f'}) 2025-11-23 00:52:05.562008 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b63f9958-8ac2-53b3-b8b4-a449f25b1af6', 'data_vg': 'ceph-b63f9958-8ac2-53b3-b8b4-a449f25b1af6'}) 2025-11-23 00:52:05.562047 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c884493c-7b6c-5149-8c24-d999b26a8d07', 'data_vg': 'ceph-c884493c-7b6c-5149-8c24-d999b26a8d07'}) 2025-11-23 00:52:05.562058 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-939e3465-cd43-5a63-a3e3-1280596736df', 'data_vg': 'ceph-939e3465-cd43-5a63-a3e3-1280596736df'}) 2025-11-23 00:52:05.562063 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-889c1fef-e00e-5a44-b704-8d22cfa7cd7a', 'data_vg': 'ceph-889c1fef-e00e-5a44-b704-8d22cfa7cd7a'}) 2025-11-23 00:52:05.562068 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1076031f-9245-50d5-902f-2c37ef490a74', 'data_vg': 'ceph-1076031f-9245-50d5-902f-2c37ef490a74'}) 2025-11-23 00:52:05.562073 | orchestrator | 2025-11-23 00:52:05.562078 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-11-23 00:52:05.562083 | orchestrator | Sunday 23 November 2025 00:48:57 +0000 (0:00:39.714) 0:07:11.157 ******* 2025-11-23 00:52:05.562088 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562093 | orchestrator | skipping: [testbed-node-4] 2025-11-23 
00:52:05.562098 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.562103 | orchestrator | 2025-11-23 00:52:05.562107 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-11-23 00:52:05.562112 | orchestrator | Sunday 23 November 2025 00:48:57 +0000 (0:00:00.267) 0:07:11.425 ******* 2025-11-23 00:52:05.562117 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.562122 | orchestrator | 2025-11-23 00:52:05.562127 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-11-23 00:52:05.562132 | orchestrator | Sunday 23 November 2025 00:48:58 +0000 (0:00:00.486) 0:07:11.911 ******* 2025-11-23 00:52:05.562137 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.562142 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.562146 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.562151 | orchestrator | 2025-11-23 00:52:05.562156 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-11-23 00:52:05.562161 | orchestrator | Sunday 23 November 2025 00:48:58 +0000 (0:00:00.776) 0:07:12.687 ******* 2025-11-23 00:52:05.562166 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.562171 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.562176 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.562180 | orchestrator | 2025-11-23 00:52:05.562185 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-11-23 00:52:05.562190 | orchestrator | Sunday 23 November 2025 00:49:01 +0000 (0:00:02.540) 0:07:15.228 ******* 2025-11-23 00:52:05.562195 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.562200 | orchestrator | 2025-11-23 00:52:05.562205 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-11-23 00:52:05.562210 | orchestrator | Sunday 23 November 2025 00:49:01 +0000 (0:00:00.458) 0:07:15.686 ******* 2025-11-23 00:52:05.562214 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.562219 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.562228 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.562233 | orchestrator | 2025-11-23 00:52:05.562238 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-11-23 00:52:05.562242 | orchestrator | Sunday 23 November 2025 00:49:03 +0000 (0:00:01.278) 0:07:16.965 ******* 2025-11-23 00:52:05.562247 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.562252 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.562257 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.562262 | orchestrator | 2025-11-23 00:52:05.562267 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-11-23 00:52:05.562271 | orchestrator | Sunday 23 November 2025 00:49:04 +0000 (0:00:01.086) 0:07:18.051 ******* 2025-11-23 00:52:05.562276 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.562281 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.562286 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.562291 | orchestrator | 2025-11-23 00:52:05.562296 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-11-23 00:52:05.562300 | orchestrator | Sunday 23 November 2025 00:49:05 +0000 (0:00:01.720) 0:07:19.772 ******* 2025-11-23 00:52:05.562305 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562310 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.562315 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.562321 | orchestrator | 2025-11-23 00:52:05.562329 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-11-23 00:52:05.562335 | orchestrator | Sunday 23 November 2025 00:49:06 +0000 (0:00:00.299) 0:07:20.072 ******* 2025-11-23 00:52:05.562343 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562348 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.562353 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.562358 | orchestrator | 2025-11-23 00:52:05.562362 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-11-23 00:52:05.562367 | orchestrator | Sunday 23 November 2025 00:49:06 +0000 (0:00:00.511) 0:07:20.584 ******* 2025-11-23 00:52:05.562372 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-23 00:52:05.562414 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-11-23 00:52:05.562424 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-11-23 00:52:05.562429 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-11-23 00:52:05.562434 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-11-23 00:52:05.562438 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-11-23 00:52:05.562443 | orchestrator | 2025-11-23 00:52:05.562448 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-11-23 00:52:05.562453 | orchestrator | Sunday 23 November 2025 00:49:07 +0000 (0:00:01.063) 0:07:21.647 ******* 2025-11-23 00:52:05.562457 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-11-23 00:52:05.562462 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-11-23 00:52:05.562468 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-11-23 00:52:05.562477 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-11-23 00:52:05.562482 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-11-23 00:52:05.562487 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-11-23 00:52:05.562491 | orchestrator | 2025-11-23 00:52:05.562500 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-11-23 00:52:05.562505 | orchestrator | Sunday 23 November 2025 00:49:09 +0000 (0:00:02.055) 0:07:23.703 ******* 2025-11-23 00:52:05.562510 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-11-23 00:52:05.562514 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-11-23 00:52:05.562519 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-11-23 00:52:05.562524 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-11-23 00:52:05.562528 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-11-23 00:52:05.562558 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-11-23 00:52:05.562566 | orchestrator | 2025-11-23 00:52:05.562574 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-11-23 00:52:05.562584 | orchestrator | Sunday 23 November 2025 00:49:13 +0000 (0:00:03.648) 0:07:27.351 ******* 2025-11-23 00:52:05.562589 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562593 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.562598 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-23 00:52:05.562603 | orchestrator | 2025-11-23 00:52:05.562608 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-11-23 00:52:05.562613 | orchestrator | Sunday 23 November 2025 00:49:17 +0000 (0:00:03.606) 0:07:30.957 ******* 2025-11-23 00:52:05.562618 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562623 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.562628 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-11-23 00:52:05.562632 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-23 00:52:05.562637 | orchestrator | 2025-11-23 00:52:05.562642 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-11-23 00:52:05.562647 | orchestrator | Sunday 23 November 2025 00:49:29 +0000 (0:00:12.233) 0:07:43.191 ******* 2025-11-23 00:52:05.562652 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562657 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.562662 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.562666 | orchestrator | 2025-11-23 00:52:05.562671 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-23 00:52:05.562676 | orchestrator | Sunday 23 November 2025 00:49:30 +0000 (0:00:00.883) 0:07:44.075 ******* 2025-11-23 00:52:05.562681 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562686 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.562691 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.562695 | orchestrator | 2025-11-23 00:52:05.562700 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-11-23 00:52:05.562705 | orchestrator | Sunday 23 November 2025 00:49:30 +0000 (0:00:00.300) 0:07:44.375 ******* 2025-11-23 00:52:05.562710 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.562715 | orchestrator | 2025-11-23 00:52:05.562720 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-11-23 00:52:05.562724 | orchestrator | Sunday 23 November 2025 00:49:30 +0000 (0:00:00.462) 0:07:44.838 ******* 2025-11-23 00:52:05.562729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-23 00:52:05.562734 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-11-23 00:52:05.562739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-23 00:52:05.562744 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562749 | orchestrator | 2025-11-23 00:52:05.562754 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-11-23 00:52:05.562758 | orchestrator | Sunday 23 November 2025 00:49:31 +0000 (0:00:00.666) 0:07:45.504 ******* 2025-11-23 00:52:05.562763 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562768 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.562773 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.562778 | orchestrator | 2025-11-23 00:52:05.562783 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-11-23 00:52:05.562788 | orchestrator | Sunday 23 November 2025 00:49:31 +0000 (0:00:00.277) 0:07:45.781 ******* 2025-11-23 00:52:05.562792 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562797 | orchestrator | 2025-11-23 00:52:05.562802 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-11-23 00:52:05.562807 | orchestrator | Sunday 23 November 2025 00:49:32 +0000 (0:00:00.196) 0:07:45.978 ******* 2025-11-23 00:52:05.562812 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562820 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.562825 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.562833 | orchestrator | 2025-11-23 00:52:05.562838 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-11-23 00:52:05.562843 | orchestrator | Sunday 23 November 2025 00:49:32 +0000 (0:00:00.274) 0:07:46.252 ******* 2025-11-23 00:52:05.562848 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562853 | orchestrator | 2025-11-23 00:52:05.562857 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-11-23 00:52:05.562862 | orchestrator | Sunday 23 November 2025 00:49:32 +0000 (0:00:00.202) 0:07:46.455 ******* 2025-11-23 00:52:05.562867 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562872 | orchestrator | 2025-11-23 00:52:05.562877 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-11-23 00:52:05.562881 | orchestrator | Sunday 23 November 2025 00:49:32 +0000 (0:00:00.196) 0:07:46.652 ******* 2025-11-23 00:52:05.562908 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562915 | orchestrator | 2025-11-23 00:52:05.562919 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-11-23 00:52:05.562924 | orchestrator | Sunday 23 November 2025 00:49:32 +0000 (0:00:00.109) 0:07:46.761 ******* 2025-11-23 00:52:05.562929 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562934 | orchestrator | 2025-11-23 00:52:05.562938 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-11-23 00:52:05.562943 | orchestrator | Sunday 23 November 2025 00:49:33 +0000 (0:00:00.195) 0:07:46.957 ******* 2025-11-23 00:52:05.562951 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.562956 | orchestrator | 2025-11-23 00:52:05.562961 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-11-23 00:52:05.562966 | orchestrator | Sunday 23 November 2025 00:49:33 +0000 (0:00:00.183) 0:07:47.140 ******* 2025-11-23 00:52:05.562970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-23 00:52:05.562975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-23 00:52:05.562980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-23 00:52:05.562985 | orchestrator | skipping: [testbed-node-3] 2025-11-23 
00:52:05.562989 | orchestrator | 2025-11-23 00:52:05.562994 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-11-23 00:52:05.562999 | orchestrator | Sunday 23 November 2025 00:49:33 +0000 (0:00:00.754) 0:07:47.895 ******* 2025-11-23 00:52:05.563004 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.563008 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.563013 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.563018 | orchestrator | 2025-11-23 00:52:05.563023 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-11-23 00:52:05.563027 | orchestrator | Sunday 23 November 2025 00:49:34 +0000 (0:00:00.284) 0:07:48.179 ******* 2025-11-23 00:52:05.563032 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.563040 | orchestrator | 2025-11-23 00:52:05.563047 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-11-23 00:52:05.563052 | orchestrator | Sunday 23 November 2025 00:49:34 +0000 (0:00:00.206) 0:07:48.385 ******* 2025-11-23 00:52:05.563057 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.563061 | orchestrator | 2025-11-23 00:52:05.563066 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-11-23 00:52:05.563071 | orchestrator | 2025-11-23 00:52:05.563076 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-23 00:52:05.563080 | orchestrator | Sunday 23 November 2025 00:49:35 +0000 (0:00:00.581) 0:07:48.967 ******* 2025-11-23 00:52:05.563085 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.563091 | orchestrator | 2025-11-23 00:52:05.563095 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-11-23 00:52:05.563100 | orchestrator | Sunday 23 November 2025 00:49:36 +0000 (0:00:01.153) 0:07:50.120 ******* 2025-11-23 00:52:05.563109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:52:05.563114 | orchestrator | 2025-11-23 00:52:05.563119 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-23 00:52:05.563123 | orchestrator | Sunday 23 November 2025 00:49:37 +0000 (0:00:01.124) 0:07:51.245 ******* 2025-11-23 00:52:05.563128 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.563133 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.563138 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.563143 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.563148 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.563152 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.563157 | orchestrator | 2025-11-23 00:52:05.563162 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-23 00:52:05.563167 | orchestrator | Sunday 23 November 2025 00:49:38 +0000 (0:00:01.032) 0:07:52.277 ******* 2025-11-23 00:52:05.563172 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.563176 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.563181 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.563186 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.563191 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.563195 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.563200 | orchestrator | 2025-11-23 00:52:05.563205 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-23 00:52:05.563210 | orchestrator | Sunday 23 
November 2025 00:49:39 +0000 (0:00:00.700) 0:07:52.978 ******* 2025-11-23 00:52:05.563214 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.563219 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.563224 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.563229 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.563234 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.563239 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.563243 | orchestrator | 2025-11-23 00:52:05.563256 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-23 00:52:05.563263 | orchestrator | Sunday 23 November 2025 00:49:39 +0000 (0:00:00.885) 0:07:53.864 ******* 2025-11-23 00:52:05.563271 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.563279 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.563286 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.563293 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.563299 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.563306 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.563312 | orchestrator | 2025-11-23 00:52:05.563319 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-23 00:52:05.563327 | orchestrator | Sunday 23 November 2025 00:49:40 +0000 (0:00:00.650) 0:07:54.514 ******* 2025-11-23 00:52:05.563334 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.563341 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.563349 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.563357 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.563365 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.563371 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.563376 | orchestrator | 2025-11-23 00:52:05.563411 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-11-23 00:52:05.563416 | orchestrator | Sunday 23 November 2025 00:49:41 +0000 (0:00:01.072) 0:07:55.587 ******* 2025-11-23 00:52:05.563420 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.563425 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.563430 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.563435 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.563440 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.563448 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.563453 | orchestrator | 2025-11-23 00:52:05.563462 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-23 00:52:05.563467 | orchestrator | Sunday 23 November 2025 00:49:42 +0000 (0:00:00.533) 0:07:56.121 ******* 2025-11-23 00:52:05.563473 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.563478 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.563483 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.563489 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:52:05.563494 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:52:05.563499 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:52:05.563505 | orchestrator | 2025-11-23 00:52:05.563510 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-23 00:52:05.563516 | orchestrator | Sunday 23 November 2025 00:49:42 +0000 (0:00:00.653) 0:07:56.774 ******* 2025-11-23 00:52:05.563525 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.563532 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.563538 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:52:05.563543 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:52:05.563550 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:52:05.563559 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.563565 | 
orchestrator |
2025-11-23 00:52:05.563571 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-11-23 00:52:05.563576 | orchestrator | Sunday 23 November 2025 00:49:43 +0000 (0:00:01.058) 0:07:57.833 *******
2025-11-23 00:52:05.563581 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.563587 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.563597 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.563603 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.563608 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.563614 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.563619 | orchestrator |
2025-11-23 00:52:05.563625 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-11-23 00:52:05.563634 | orchestrator | Sunday 23 November 2025 00:49:45 +0000 (0:00:01.258) 0:07:59.091 *******
2025-11-23 00:52:05.563640 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.563646 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.563651 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.563656 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.563662 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.563667 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.563672 | orchestrator |
2025-11-23 00:52:05.563678 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-11-23 00:52:05.563683 | orchestrator | Sunday 23 November 2025 00:49:45 +0000 (0:00:00.532) 0:07:59.624 *******
2025-11-23 00:52:05.563688 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.563694 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.563699 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.563704 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.563709 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.563715 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.563720 | orchestrator |
2025-11-23 00:52:05.563725 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-11-23 00:52:05.563731 | orchestrator | Sunday 23 November 2025 00:49:46 +0000 (0:00:00.735) 0:08:00.360 *******
2025-11-23 00:52:05.563736 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.563741 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.563747 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.563752 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.563757 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.563763 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.563768 | orchestrator |
2025-11-23 00:52:05.563773 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-11-23 00:52:05.563779 | orchestrator | Sunday 23 November 2025 00:49:47 +0000 (0:00:00.545) 0:08:00.905 *******
2025-11-23 00:52:05.563784 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.563793 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.563799 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.563804 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.563809 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.563815 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.563820 | orchestrator |
2025-11-23 00:52:05.563825 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-11-23 00:52:05.563831 | orchestrator | Sunday 23 November 2025 00:49:47 +0000 (0:00:00.669) 0:08:01.575 *******
2025-11-23 00:52:05.563836 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.563841 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.563847 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.563852 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.563857 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.563863 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.563868 | orchestrator |
2025-11-23 00:52:05.563877 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-11-23 00:52:05.563883 | orchestrator | Sunday 23 November 2025 00:49:48 +0000 (0:00:00.505) 0:08:02.080 *******
2025-11-23 00:52:05.563888 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.563894 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.563899 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.563904 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.563910 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.563915 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.563920 | orchestrator |
2025-11-23 00:52:05.563925 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-11-23 00:52:05.563931 | orchestrator | Sunday 23 November 2025 00:49:48 +0000 (0:00:00.647) 0:08:02.727 *******
2025-11-23 00:52:05.563936 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.563942 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.563947 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.563952 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:52:05.563957 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:52:05.563963 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:52:05.563968 | orchestrator |
2025-11-23 00:52:05.563973 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-11-23 00:52:05.563979 | orchestrator | Sunday 23 November 2025 00:49:49 +0000 (0:00:00.497) 0:08:03.225 *******
2025-11-23 00:52:05.563984 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.563989 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.563995 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.564000 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.564009 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.564014 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.564020 | orchestrator |
2025-11-23 00:52:05.564025 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-11-23 00:52:05.564030 | orchestrator | Sunday 23 November 2025 00:49:49 +0000 (0:00:00.660) 0:08:03.885 *******
2025-11-23 00:52:05.564036 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564041 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564046 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564051 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.564057 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.564062 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.564067 | orchestrator |
2025-11-23 00:52:05.564072 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-11-23 00:52:05.564078 | orchestrator | Sunday 23 November 2025 00:49:50 +0000 (0:00:00.548) 0:08:04.433 *******
2025-11-23 00:52:05.564083 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564088 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564094 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564099 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.564104 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.564109 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.564118 | orchestrator |
2025-11-23 00:52:05.564124 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-11-23 00:52:05.564129 | orchestrator | Sunday 23 November 2025 00:49:51 +0000 (0:00:01.056) 0:08:05.490 *******
2025-11-23 00:52:05.564134 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-23 00:52:05.564141 | orchestrator |
2025-11-23 00:52:05.564151 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-11-23 00:52:05.564156 | orchestrator | Sunday 23 November 2025 00:49:55 +0000 (0:00:03.925) 0:08:09.416 *******
2025-11-23 00:52:05.564161 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-23 00:52:05.564167 | orchestrator |
2025-11-23 00:52:05.564172 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-11-23 00:52:05.564178 | orchestrator | Sunday 23 November 2025 00:49:57 +0000 (0:00:02.046) 0:08:11.463 *******
2025-11-23 00:52:05.564183 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.564188 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.564194 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.564199 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.564205 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:52:05.564210 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:52:05.564215 | orchestrator |
2025-11-23 00:52:05.564221 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-11-23 00:52:05.564226 | orchestrator | Sunday 23 November 2025 00:49:59 +0000 (0:00:01.542) 0:08:13.005 *******
2025-11-23 00:52:05.564232 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.564237 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.564242 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.564248 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:52:05.564253 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:52:05.564258 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:52:05.564264 | orchestrator |
2025-11-23 00:52:05.564269 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-11-23 00:52:05.564274 | orchestrator | Sunday 23 November 2025 00:50:00 +0000 (0:00:01.150) 0:08:14.155 *******
2025-11-23 00:52:05.564280 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:52:05.564286 | orchestrator |
2025-11-23 00:52:05.564291 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-11-23 00:52:05.564297 | orchestrator | Sunday 23 November 2025 00:50:01 +0000 (0:00:01.110) 0:08:15.266 *******
2025-11-23 00:52:05.564302 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.564307 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.564313 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.564318 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:52:05.564323 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:52:05.564329 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:52:05.564334 | orchestrator |
2025-11-23 00:52:05.564339 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-11-23 00:52:05.564345 | orchestrator | Sunday 23 November 2025 00:50:02 +0000 (0:00:01.561) 0:08:16.828 *******
2025-11-23 00:52:05.564350 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.564356 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.564361 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.564366 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:52:05.564374 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:52:05.564392 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:52:05.564397 | orchestrator |
2025-11-23 00:52:05.564403 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-11-23 00:52:05.564408 | orchestrator | Sunday 23 November 2025 00:50:05 +0000 (0:00:02.787) 0:08:19.615 *******
2025-11-23 00:52:05.564413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:52:05.564423 | orchestrator |
2025-11-23 00:52:05.564428 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-11-23 00:52:05.564433 | orchestrator | Sunday 23 November 2025 00:50:06 +0000 (0:00:01.045) 0:08:20.661 *******
2025-11-23 00:52:05.564439 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564444 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564449 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564455 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.564460 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.564465 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.564470 | orchestrator |
2025-11-23 00:52:05.564476 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-11-23 00:52:05.564481 | orchestrator | Sunday 23 November 2025 00:50:07 +0000 (0:00:00.648) 0:08:21.309 *******
2025-11-23 00:52:05.564486 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.564492 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.564497 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.564502 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:52:05.564511 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:52:05.564516 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:52:05.564521 | orchestrator |
2025-11-23 00:52:05.564527 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-11-23 00:52:05.564532 | orchestrator | Sunday 23 November 2025 00:50:09 +0000 (0:00:01.992) 0:08:23.302 *******
2025-11-23 00:52:05.564538 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564543 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564548 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564553 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:52:05.564559 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:52:05.564564 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:52:05.564569 | orchestrator |
2025-11-23 00:52:05.564574 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-11-23 00:52:05.564580 | orchestrator |
2025-11-23 00:52:05.564585 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-11-23 00:52:05.564590 | orchestrator | Sunday 23 November 2025 00:50:10 +0000 (0:00:00.864) 0:08:24.166 *******
2025-11-23 00:52:05.564596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.564601 | orchestrator |
2025-11-23 00:52:05.564607 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-11-23 00:52:05.564612 | orchestrator | Sunday 23 November 2025 00:50:10 +0000 (0:00:00.441) 0:08:24.608 *******
2025-11-23 00:52:05.564617 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.564623 | orchestrator |
2025-11-23 00:52:05.564628 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-11-23 00:52:05.564633 | orchestrator | Sunday 23 November 2025 00:50:11 +0000 (0:00:00.612) 0:08:25.220 *******
2025-11-23 00:52:05.564639 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.564644 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.564649 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.564655 | orchestrator |
2025-11-23 00:52:05.564660 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-11-23 00:52:05.564665 | orchestrator | Sunday 23 November 2025 00:50:11 +0000 (0:00:00.267) 0:08:25.488 *******
2025-11-23 00:52:05.564671 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564676 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564681 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564686 | orchestrator |
2025-11-23 00:52:05.564692 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-11-23 00:52:05.564697 | orchestrator | Sunday 23 November 2025 00:50:12 +0000 (0:00:00.610) 0:08:26.099 *******
2025-11-23 00:52:05.564702 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564711 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564717 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564722 | orchestrator |
2025-11-23 00:52:05.564728 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-11-23 00:52:05.564733 | orchestrator | Sunday 23 November 2025 00:50:13 +0000 (0:00:00.815) 0:08:26.914 *******
2025-11-23 00:52:05.564738 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564743 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564749 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564754 | orchestrator |
2025-11-23 00:52:05.564759 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-11-23 00:52:05.564765 | orchestrator | Sunday 23 November 2025 00:50:13 +0000 (0:00:00.620) 0:08:27.534 *******
2025-11-23 00:52:05.564770 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.564775 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.564781 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.564786 | orchestrator |
2025-11-23 00:52:05.564791 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-11-23 00:52:05.564796 | orchestrator | Sunday 23 November 2025 00:50:13 +0000 (0:00:00.274) 0:08:27.808 *******
2025-11-23 00:52:05.564802 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.564807 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.564812 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.564817 | orchestrator |
2025-11-23 00:52:05.564823 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-11-23 00:52:05.564828 | orchestrator | Sunday 23 November 2025 00:50:14 +0000 (0:00:00.253) 0:08:28.062 *******
2025-11-23 00:52:05.564833 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.564839 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.564844 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.564849 | orchestrator |
2025-11-23 00:52:05.564857 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-11-23 00:52:05.564863 | orchestrator | Sunday 23 November 2025 00:50:14 +0000 (0:00:00.417) 0:08:28.479 *******
2025-11-23 00:52:05.564868 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564873 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564879 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564884 | orchestrator |
2025-11-23 00:52:05.564889 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-11-23 00:52:05.564894 | orchestrator | Sunday 23 November 2025 00:50:15 +0000 (0:00:00.707) 0:08:29.187 *******
2025-11-23 00:52:05.564900 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.564905 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.564910 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.564916 | orchestrator |
2025-11-23 00:52:05.564921 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-11-23 00:52:05.564926 | orchestrator | Sunday 23 November 2025 00:50:15 +0000 (0:00:00.689) 0:08:29.876 *******
2025-11-23 00:52:05.564932 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.564937 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.564942 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.564947 | orchestrator |
2025-11-23 00:52:05.564953 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-11-23 00:52:05.564958 | orchestrator | Sunday 23 November 2025 00:50:16 +0000 (0:00:00.297) 0:08:30.174 *******
2025-11-23 00:52:05.564963 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.564969 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.564974 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.564979 | orchestrator |
2025-11-23 00:52:05.564987 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-11-23 00:52:05.564993 | orchestrator | Sunday 23 November 2025 00:50:16 +0000 (0:00:00.493) 0:08:30.668 *******
2025-11-23 00:52:05.564998 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.565004 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.565012 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.565018 | orchestrator |
2025-11-23 00:52:05.565023 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-11-23 00:52:05.565029 | orchestrator | Sunday 23 November 2025 00:50:17 +0000 (0:00:00.329) 0:08:30.997 *******
2025-11-23 00:52:05.565034 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.565039 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.565044 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.565050 | orchestrator |
2025-11-23 00:52:05.565055 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-11-23 00:52:05.565061 | orchestrator | Sunday 23 November 2025 00:50:17 +0000 (0:00:00.324) 0:08:31.321 *******
2025-11-23 00:52:05.565066 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.565071 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.565076 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.565082 | orchestrator |
2025-11-23 00:52:05.565087 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-11-23 00:52:05.565092 | orchestrator | Sunday 23 November 2025 00:50:17 +0000 (0:00:00.321) 0:08:31.643 *******
2025-11-23 00:52:05.565098 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.565103 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.565108 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.565114 | orchestrator |
2025-11-23 00:52:05.565119 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-11-23 00:52:05.565124 | orchestrator | Sunday 23 November 2025 00:50:18 +0000 (0:00:00.481) 0:08:32.124 *******
2025-11-23 00:52:05.565129 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.565135 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.565140 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.565145 | orchestrator |
2025-11-23 00:52:05.565151 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-11-23 00:52:05.565156 | orchestrator | Sunday 23 November 2025 00:50:18 +0000 (0:00:00.277) 0:08:32.402 *******
2025-11-23 00:52:05.565161 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.565167 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.565172 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.565177 | orchestrator |
2025-11-23 00:52:05.565182 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-11-23 00:52:05.565188 | orchestrator | Sunday 23 November 2025 00:50:18 +0000 (0:00:00.276) 0:08:32.679 *******
2025-11-23 00:52:05.565193 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.565198 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.565204 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.565209 | orchestrator |
2025-11-23 00:52:05.565214 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-11-23 00:52:05.565220 | orchestrator | Sunday 23 November 2025 00:50:19 +0000 (0:00:00.291) 0:08:32.970 *******
2025-11-23 00:52:05.565225 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.565230 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.565235 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.565241 | orchestrator |
2025-11-23 00:52:05.565246 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-11-23 00:52:05.565251 | orchestrator | Sunday 23 November 2025 00:50:19 +0000 (0:00:00.661) 0:08:33.631 *******
2025-11-23 00:52:05.565257 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.565262 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.565267 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-11-23 00:52:05.565272 | orchestrator |
2025-11-23 00:52:05.565278 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-11-23 00:52:05.565283 | orchestrator | Sunday 23 November 2025 00:50:20 +0000 (0:00:00.363) 0:08:33.994 *******
2025-11-23 00:52:05.565288 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-23 00:52:05.565294 | orchestrator |
2025-11-23 00:52:05.565299 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-11-23 00:52:05.565308 | orchestrator | Sunday 23 November 2025 00:50:22 +0000 (0:00:02.182) 0:08:36.176 *******
2025-11-23 00:52:05.565317 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-11-23 00:52:05.565324 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.565330 | orchestrator |
2025-11-23 00:52:05.565335 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-11-23 00:52:05.565340 | orchestrator | Sunday 23 November 2025 00:50:22 +0000 (0:00:00.178) 0:08:36.355 *******
2025-11-23 00:52:05.565347 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-23 00:52:05.565357 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-23 00:52:05.565363 | orchestrator |
2025-11-23 00:52:05.565368 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-11-23 00:52:05.565374 | orchestrator | Sunday 23 November 2025 00:50:31 +0000 (0:00:09.105) 0:08:45.460 *******
2025-11-23 00:52:05.565398 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-23 00:52:05.565403 | orchestrator |
2025-11-23 00:52:05.565412 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-11-23 00:52:05.565417 | orchestrator | Sunday 23 November 2025 00:50:34 +0000 (0:00:03.325) 0:08:48.786 *******
2025-11-23 00:52:05.565423 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.565428 | orchestrator |
2025-11-23 00:52:05.565434 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-11-23 00:52:05.565439 | orchestrator | Sunday 23 November 2025 00:50:35 +0000 (0:00:00.508) 0:08:49.294 *******
2025-11-23 00:52:05.565444 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-11-23 00:52:05.565450 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-11-23 00:52:05.565455 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-11-23 00:52:05.565460 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-11-23 00:52:05.565466 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-11-23 00:52:05.565471 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-11-23 00:52:05.565477 | orchestrator |
2025-11-23 00:52:05.565482 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-11-23 00:52:05.565487 | orchestrator | Sunday 23 November 2025 00:50:36 +0000 (0:00:01.113) 0:08:50.407 *******
2025-11-23 00:52:05.565493 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:52:05.565498 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-11-23 00:52:05.565503 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-11-23 00:52:05.565509 | orchestrator |
2025-11-23 00:52:05.565514 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-11-23 00:52:05.565520 | orchestrator | Sunday 23 November 2025 00:50:38 +0000 (0:00:02.092) 0:08:52.500 *******
2025-11-23 00:52:05.565525 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-23 00:52:05.565530 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-11-23 00:52:05.565536 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.565541 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-23 00:52:05.565551 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-11-23 00:52:05.565556 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.565562 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-23 00:52:05.565567 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-11-23 00:52:05.565572 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.565578 | orchestrator |
2025-11-23 00:52:05.565583 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-11-23 00:52:05.565588 | orchestrator | Sunday 23 November 2025 00:50:39 +0000 (0:00:01.312) 0:08:53.812 *******
2025-11-23 00:52:05.565594 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.565599 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.565604 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.565610 | orchestrator |
2025-11-23 00:52:05.565615 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-11-23 00:52:05.565621 | orchestrator | Sunday 23 November 2025 00:50:42 +0000 (0:00:02.781) 0:08:56.594 *******
2025-11-23 00:52:05.565626 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.565631 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.565637 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.565642 | orchestrator |
2025-11-23 00:52:05.565647 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-11-23 00:52:05.565653 | orchestrator | Sunday 23 November 2025 00:50:43 +0000 (0:00:00.582) 0:08:57.177 *******
2025-11-23 00:52:05.565658 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.565664 | orchestrator |
2025-11-23 00:52:05.565669 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-11-23 00:52:05.565674 | orchestrator | Sunday 23 November 2025 00:50:44 +0000 (0:00:01.210) 0:08:58.387 *******
2025-11-23 00:52:05.565680 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.565685 | orchestrator |
2025-11-23 00:52:05.565690 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-11-23 00:52:05.565699 | orchestrator | Sunday 23 November 2025 00:50:45 +0000 (0:00:00.565) 0:08:58.953 *******
2025-11-23 00:52:05.565705 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.565710 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.565715 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.565721 | orchestrator |
2025-11-23 00:52:05.565726 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-11-23 00:52:05.565731 | orchestrator | Sunday 23 November 2025 00:50:46 +0000 (0:00:01.249) 0:09:00.203 *******
2025-11-23 00:52:05.565736 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.565742 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.565747 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.565752 | orchestrator |
2025-11-23 00:52:05.565758 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-11-23 00:52:05.565763 | orchestrator | Sunday 23 November 2025 00:50:47 +0000 (0:00:01.476) 0:09:01.679 *******
2025-11-23 00:52:05.565768 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.565774 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.565779 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.565784 | orchestrator |
2025-11-23 00:52:05.565790 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-11-23 00:52:05.565795 | orchestrator | Sunday 23 November 2025 00:50:49 +0000 (0:00:01.857) 0:09:03.537 *******
2025-11-23 00:52:05.565800 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.565806 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.565811 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.565816 | orchestrator |
2025-11-23 00:52:05.565824 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-11-23 00:52:05.565830 | orchestrator | Sunday 23 November 2025 00:50:51 +0000 (0:00:02.035) 0:09:05.572 *******
2025-11-23 00:52:05.565839 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.565845 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.565850 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.565856 | orchestrator |
2025-11-23 00:52:05.565861 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-11-23 00:52:05.565866 | orchestrator | Sunday 23 November 2025 00:50:53 +0000 (0:00:01.381) 0:09:06.954 *******
2025-11-23 00:52:05.565872 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.565877 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.565882 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.565888 | orchestrator |
2025-11-23 00:52:05.565893 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-11-23 00:52:05.565898 | orchestrator | Sunday 23 November 2025 00:50:53 +0000 (0:00:00.648) 0:09:07.603 *******
2025-11-23 00:52:05.565904 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.565909 | orchestrator |
2025-11-23 00:52:05.565915 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-11-23 00:52:05.565920 | orchestrator | Sunday 23 November 2025 00:50:54 +0000 (0:00:00.613) 0:09:08.216 *******
2025-11-23 00:52:05.565925 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.565931 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.565936 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.565941 | orchestrator |
2025-11-23 00:52:05.565947 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-11-23 00:52:05.565952 | orchestrator | Sunday 23 November 2025 00:50:54 +0000 (0:00:00.275) 0:09:08.491 *******
2025-11-23 00:52:05.565957 | orchestrator | changed: [testbed-node-3]
2025-11-23 00:52:05.565963 | orchestrator | changed: [testbed-node-4]
2025-11-23 00:52:05.565968 | orchestrator | changed: [testbed-node-5]
2025-11-23 00:52:05.565974 | orchestrator |
2025-11-23 00:52:05.565979 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-11-23 00:52:05.565984 | orchestrator | Sunday 23 November 2025 00:50:55 +0000 (0:00:01.139) 0:09:09.631 *******
2025-11-23 00:52:05.565990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-23 00:52:05.565995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-23 00:52:05.566000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-23 00:52:05.566006 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.566011 | orchestrator |
2025-11-23 00:52:05.566036 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-11-23 00:52:05.566042 | orchestrator | Sunday 23 November 2025 00:50:56 +0000 (0:00:00.818) 0:09:10.449 *******
2025-11-23 00:52:05.566047 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.566052 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:52:05.566058 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:52:05.566063 | orchestrator |
2025-11-23 00:52:05.566069 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-11-23 00:52:05.566074 | orchestrator |
2025-11-23 00:52:05.566080 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-11-23 00:52:05.566085 | orchestrator | Sunday 23 November 2025 00:50:57 +0000 (0:00:00.764) 0:09:11.214 *******
2025-11-23 00:52:05.566091 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.566096 | orchestrator |
2025-11-23 00:52:05.566102 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-11-23 00:52:05.566107 | orchestrator | Sunday 23 November 2025 00:50:57 +0000 (0:00:00.436) 0:09:11.650 *******
2025-11-23 00:52:05.566113 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:52:05.566118 | orchestrator |
2025-11-23 00:52:05.566124 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-11-23 00:52:05.566129 | orchestrator | Sunday 23 November 2025 00:50:58 +0000 (0:00:00.583) 0:09:12.233 *******
2025-11-23 00:52:05.566141 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:52:05.566146 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:52:05.566152 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:52:05.566157 | orchestrator |
2025-11-23 00:52:05.566163 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-11-23 00:52:05.566171 | orchestrator | Sunday 23 November 2025 00:50:58 +0000 (0:00:00.259) 0:09:12.493 *******
2025-11-23 00:52:05.566177 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:52:05.566183 | orchestrator | ok: [testbed-node-4]
2025-11-23
00:52:05.566188 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566194 | orchestrator | 2025-11-23 00:52:05.566199 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-23 00:52:05.566205 | orchestrator | Sunday 23 November 2025 00:50:59 +0000 (0:00:00.652) 0:09:13.146 ******* 2025-11-23 00:52:05.566210 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566216 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566221 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566226 | orchestrator | 2025-11-23 00:52:05.566232 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-23 00:52:05.566237 | orchestrator | Sunday 23 November 2025 00:50:59 +0000 (0:00:00.711) 0:09:13.858 ******* 2025-11-23 00:52:05.566243 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566248 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566254 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566259 | orchestrator | 2025-11-23 00:52:05.566265 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-23 00:52:05.566270 | orchestrator | Sunday 23 November 2025 00:51:00 +0000 (0:00:00.872) 0:09:14.730 ******* 2025-11-23 00:52:05.566276 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566281 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566287 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.566292 | orchestrator | 2025-11-23 00:52:05.566298 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-23 00:52:05.566306 | orchestrator | Sunday 23 November 2025 00:51:01 +0000 (0:00:00.284) 0:09:15.015 ******* 2025-11-23 00:52:05.566312 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566317 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566323 | orchestrator | skipping: 
[testbed-node-5] 2025-11-23 00:52:05.566328 | orchestrator | 2025-11-23 00:52:05.566333 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-23 00:52:05.566339 | orchestrator | Sunday 23 November 2025 00:51:01 +0000 (0:00:00.269) 0:09:15.285 ******* 2025-11-23 00:52:05.566344 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566350 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566355 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.566360 | orchestrator | 2025-11-23 00:52:05.566366 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-23 00:52:05.566371 | orchestrator | Sunday 23 November 2025 00:51:01 +0000 (0:00:00.258) 0:09:15.543 ******* 2025-11-23 00:52:05.566393 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566399 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566404 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566409 | orchestrator | 2025-11-23 00:52:05.566415 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-23 00:52:05.566420 | orchestrator | Sunday 23 November 2025 00:51:02 +0000 (0:00:00.855) 0:09:16.399 ******* 2025-11-23 00:52:05.566426 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566431 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566436 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566442 | orchestrator | 2025-11-23 00:52:05.566447 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-23 00:52:05.566453 | orchestrator | Sunday 23 November 2025 00:51:03 +0000 (0:00:00.666) 0:09:17.066 ******* 2025-11-23 00:52:05.566458 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566468 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566473 | orchestrator | skipping: [testbed-node-5] 2025-11-23 
00:52:05.566479 | orchestrator | 2025-11-23 00:52:05.566484 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-23 00:52:05.566489 | orchestrator | Sunday 23 November 2025 00:51:03 +0000 (0:00:00.273) 0:09:17.339 ******* 2025-11-23 00:52:05.566495 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566500 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566505 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.566511 | orchestrator | 2025-11-23 00:52:05.566516 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-23 00:52:05.566521 | orchestrator | Sunday 23 November 2025 00:51:03 +0000 (0:00:00.273) 0:09:17.613 ******* 2025-11-23 00:52:05.566527 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566535 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566545 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566550 | orchestrator | 2025-11-23 00:52:05.566556 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-23 00:52:05.566561 | orchestrator | Sunday 23 November 2025 00:51:04 +0000 (0:00:00.437) 0:09:18.050 ******* 2025-11-23 00:52:05.566566 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566572 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566577 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566582 | orchestrator | 2025-11-23 00:52:05.566588 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-23 00:52:05.566593 | orchestrator | Sunday 23 November 2025 00:51:04 +0000 (0:00:00.269) 0:09:18.320 ******* 2025-11-23 00:52:05.566599 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566604 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566609 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566615 | orchestrator | 2025-11-23 
00:52:05.566620 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-23 00:52:05.566626 | orchestrator | Sunday 23 November 2025 00:51:04 +0000 (0:00:00.287) 0:09:18.607 ******* 2025-11-23 00:52:05.566631 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566637 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566642 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.566647 | orchestrator | 2025-11-23 00:52:05.566653 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-23 00:52:05.566658 | orchestrator | Sunday 23 November 2025 00:51:04 +0000 (0:00:00.283) 0:09:18.891 ******* 2025-11-23 00:52:05.566664 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566669 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566674 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.566680 | orchestrator | 2025-11-23 00:52:05.566685 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-23 00:52:05.566690 | orchestrator | Sunday 23 November 2025 00:51:05 +0000 (0:00:00.429) 0:09:19.320 ******* 2025-11-23 00:52:05.566696 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566701 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566709 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.566715 | orchestrator | 2025-11-23 00:52:05.566720 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-23 00:52:05.566726 | orchestrator | Sunday 23 November 2025 00:51:05 +0000 (0:00:00.269) 0:09:19.590 ******* 2025-11-23 00:52:05.566731 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566737 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566742 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566747 | orchestrator | 2025-11-23 00:52:05.566753 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-23 00:52:05.566758 | orchestrator | Sunday 23 November 2025 00:51:06 +0000 (0:00:00.320) 0:09:19.911 ******* 2025-11-23 00:52:05.566763 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.566769 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.566774 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.566783 | orchestrator | 2025-11-23 00:52:05.566789 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-11-23 00:52:05.566794 | orchestrator | Sunday 23 November 2025 00:51:06 +0000 (0:00:00.664) 0:09:20.575 ******* 2025-11-23 00:52:05.566800 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.566805 | orchestrator | 2025-11-23 00:52:05.566810 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-23 00:52:05.566816 | orchestrator | Sunday 23 November 2025 00:51:07 +0000 (0:00:00.476) 0:09:21.052 ******* 2025-11-23 00:52:05.566825 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.566830 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-23 00:52:05.566835 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-23 00:52:05.566841 | orchestrator | 2025-11-23 00:52:05.566846 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-23 00:52:05.566851 | orchestrator | Sunday 23 November 2025 00:51:09 +0000 (0:00:02.175) 0:09:23.228 ******* 2025-11-23 00:52:05.566857 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-23 00:52:05.566862 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-23 00:52:05.566867 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.566872 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-11-23 00:52:05.566878 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-23 00:52:05.566883 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.566888 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-23 00:52:05.566893 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-23 00:52:05.566899 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.566904 | orchestrator | 2025-11-23 00:52:05.566909 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-11-23 00:52:05.566914 | orchestrator | Sunday 23 November 2025 00:51:10 +0000 (0:00:01.250) 0:09:24.478 ******* 2025-11-23 00:52:05.566920 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.566925 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.566930 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.566935 | orchestrator | 2025-11-23 00:52:05.566941 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-11-23 00:52:05.566946 | orchestrator | Sunday 23 November 2025 00:51:10 +0000 (0:00:00.265) 0:09:24.744 ******* 2025-11-23 00:52:05.566951 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.566957 | orchestrator | 2025-11-23 00:52:05.566962 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-11-23 00:52:05.566967 | orchestrator | Sunday 23 November 2025 00:51:11 +0000 (0:00:00.473) 0:09:25.217 ******* 2025-11-23 00:52:05.566973 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.566978 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.566984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.566989 | orchestrator | 2025-11-23 00:52:05.566994 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-11-23 00:52:05.567000 | orchestrator | Sunday 23 November 2025 00:51:12 +0000 (0:00:01.079) 0:09:26.297 ******* 2025-11-23 00:52:05.567005 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.567010 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-23 00:52:05.567021 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.567026 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-23 00:52:05.567031 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.567037 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-23 00:52:05.567042 | orchestrator | 2025-11-23 00:52:05.567047 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-23 00:52:05.567053 | orchestrator | Sunday 23 November 2025 00:51:16 +0000 (0:00:04.528) 0:09:30.826 ******* 2025-11-23 00:52:05.567061 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.567066 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-23 00:52:05.567072 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.567077 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-23 00:52:05.567082 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:52:05.567088 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-23 00:52:05.567093 | orchestrator | 2025-11-23 00:52:05.567098 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-23 00:52:05.567103 | orchestrator | Sunday 23 November 2025 00:51:19 +0000 (0:00:02.289) 0:09:33.116 ******* 2025-11-23 00:52:05.567109 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-23 00:52:05.567114 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.567119 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-23 00:52:05.567125 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.567130 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-23 00:52:05.567135 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.567140 | orchestrator | 2025-11-23 00:52:05.567146 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-11-23 00:52:05.567151 | orchestrator | Sunday 23 November 2025 00:51:20 +0000 (0:00:01.132) 0:09:34.248 ******* 2025-11-23 00:52:05.567159 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-11-23 00:52:05.567164 | orchestrator | 2025-11-23 00:52:05.567170 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-11-23 00:52:05.567175 | orchestrator | Sunday 23 November 2025 00:51:20 +0000 (0:00:00.214) 0:09:34.462 ******* 2025-11-23 00:52:05.567180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-11-23 00:52:05.567186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567208 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.567213 | orchestrator | 2025-11-23 00:52:05.567218 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-11-23 00:52:05.567224 | orchestrator | Sunday 23 November 2025 00:51:21 +0000 (0:00:00.928) 0:09:35.391 ******* 2025-11-23 00:52:05.567229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-23 00:52:05.567261 | orchestrator | skipping: [testbed-node-3] 2025-11-23 
00:52:05.567266 | orchestrator | 2025-11-23 00:52:05.567272 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-11-23 00:52:05.567277 | orchestrator | Sunday 23 November 2025 00:51:22 +0000 (0:00:00.521) 0:09:35.913 ******* 2025-11-23 00:52:05.567283 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-23 00:52:05.567288 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-23 00:52:05.567293 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-23 00:52:05.567299 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-23 00:52:05.567304 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-23 00:52:05.567310 | orchestrator | 2025-11-23 00:52:05.567315 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-11-23 00:52:05.567320 | orchestrator | Sunday 23 November 2025 00:51:54 +0000 (0:00:32.004) 0:10:07.917 ******* 2025-11-23 00:52:05.567326 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.567331 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.567340 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.567345 | orchestrator | 2025-11-23 00:52:05.567351 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-11-23 00:52:05.567356 | orchestrator | 
Sunday 23 November 2025 00:51:54 +0000 (0:00:00.289) 0:10:08.207 ******* 2025-11-23 00:52:05.567361 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.567367 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.567372 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.567406 | orchestrator | 2025-11-23 00:52:05.567413 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-11-23 00:52:05.567418 | orchestrator | Sunday 23 November 2025 00:51:54 +0000 (0:00:00.289) 0:10:08.496 ******* 2025-11-23 00:52:05.567424 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.567429 | orchestrator | 2025-11-23 00:52:05.567434 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-11-23 00:52:05.567440 | orchestrator | Sunday 23 November 2025 00:51:55 +0000 (0:00:00.662) 0:10:09.158 ******* 2025-11-23 00:52:05.567445 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.567451 | orchestrator | 2025-11-23 00:52:05.567456 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-11-23 00:52:05.567461 | orchestrator | Sunday 23 November 2025 00:51:55 +0000 (0:00:00.475) 0:10:09.634 ******* 2025-11-23 00:52:05.567470 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.567476 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.567486 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.567497 | orchestrator | 2025-11-23 00:52:05.567502 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-11-23 00:52:05.567507 | orchestrator | Sunday 23 November 2025 00:51:56 +0000 (0:00:01.219) 0:10:10.854 ******* 2025-11-23 00:52:05.567513 | orchestrator | changed: 
[testbed-node-3] 2025-11-23 00:52:05.567518 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.567523 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.567529 | orchestrator | 2025-11-23 00:52:05.567534 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-11-23 00:52:05.567539 | orchestrator | Sunday 23 November 2025 00:51:58 +0000 (0:00:01.295) 0:10:12.149 ******* 2025-11-23 00:52:05.567545 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:52:05.567550 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:52:05.567555 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:52:05.567561 | orchestrator | 2025-11-23 00:52:05.567566 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-11-23 00:52:05.567571 | orchestrator | Sunday 23 November 2025 00:52:00 +0000 (0:00:01.834) 0:10:13.984 ******* 2025-11-23 00:52:05.567577 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.567582 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.567588 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-23 00:52:05.567593 | orchestrator | 2025-11-23 00:52:05.567598 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-23 00:52:05.567603 | orchestrator | Sunday 23 November 2025 00:52:02 +0000 (0:00:02.530) 0:10:16.514 ******* 2025-11-23 00:52:05.567609 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.567614 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.567619 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.567625 | orchestrator 
| 2025-11-23 00:52:05.567630 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-11-23 00:52:05.567635 | orchestrator | Sunday 23 November 2025 00:52:02 +0000 (0:00:00.310) 0:10:16.824 ******* 2025-11-23 00:52:05.567640 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:52:05.567646 | orchestrator | 2025-11-23 00:52:05.567651 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-11-23 00:52:05.567656 | orchestrator | Sunday 23 November 2025 00:52:03 +0000 (0:00:00.468) 0:10:17.293 ******* 2025-11-23 00:52:05.567662 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.567697 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.567704 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.567709 | orchestrator | 2025-11-23 00:52:05.567714 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-11-23 00:52:05.567720 | orchestrator | Sunday 23 November 2025 00:52:03 +0000 (0:00:00.443) 0:10:17.737 ******* 2025-11-23 00:52:05.567725 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:52:05.567730 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:52:05.567736 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:52:05.567741 | orchestrator | 2025-11-23 00:52:05.567746 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-11-23 00:52:05.567751 | orchestrator | Sunday 23 November 2025 00:52:04 +0000 (0:00:00.296) 0:10:18.033 ******* 2025-11-23 00:52:05.567757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-23 00:52:05.567762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-23 00:52:05.567767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-23 00:52:05.567773 | orchestrator 
| skipping: [testbed-node-3] 2025-11-23 00:52:05.567778 | orchestrator | 2025-11-23 00:52:05.567783 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-11-23 00:52:05.567805 | orchestrator | Sunday 23 November 2025 00:52:04 +0000 (0:00:00.551) 0:10:18.585 ******* 2025-11-23 00:52:05.567810 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:52:05.567816 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:52:05.567821 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:52:05.567826 | orchestrator | 2025-11-23 00:52:05.567835 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:52:05.567840 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-11-23 00:52:05.567846 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-11-23 00:52:05.567851 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-11-23 00:52:05.567857 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-11-23 00:52:05.567862 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-11-23 00:52:05.567867 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-11-23 00:52:05.567873 | orchestrator | 2025-11-23 00:52:05.567878 | orchestrator | 2025-11-23 00:52:05.567883 | orchestrator | 2025-11-23 00:52:05.567892 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:52:05.567898 | orchestrator | Sunday 23 November 2025 00:52:04 +0000 (0:00:00.233) 0:10:18.818 ******* 2025-11-23 00:52:05.567903 | orchestrator | =============================================================================== 
2025-11-23 00:52:05.567909 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 65.65s 2025-11-23 00:52:05.567914 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.71s 2025-11-23 00:52:05.567919 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.00s 2025-11-23 00:52:05.567925 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.21s 2025-11-23 00:52:05.567930 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.10s 2025-11-23 00:52:05.567935 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.23s 2025-11-23 00:52:05.567941 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.43s 2025-11-23 00:52:05.567946 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.65s 2025-11-23 00:52:05.567951 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.11s 2025-11-23 00:52:05.567957 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.76s 2025-11-23 00:52:05.567962 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.49s 2025-11-23 00:52:05.567967 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.04s 2025-11-23 00:52:05.567973 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.53s 2025-11-23 00:52:05.567978 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.93s 2025-11-23 00:52:05.567983 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.65s 2025-11-23 00:52:05.567989 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.61s 2025-11-23 
00:52:05.567994 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.33s 2025-11-23 00:52:05.567999 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.30s 2025-11-23 00:52:05.568005 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.17s 2025-11-23 00:52:05.568014 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.12s 2025-11-23 00:52:05.568019 | orchestrator | 2025-11-23 00:52:05 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state STARTED 2025-11-23 00:52:05.568025 | orchestrator | 2025-11-23 00:52:05 | INFO  | Task 5cfa77a8-4f7b-47fd-9950-571f9f932204 is in state STARTED 2025-11-23 00:52:05.568030 | orchestrator | 2025-11-23 00:52:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:52:08.597662 | orchestrator | 2025-11-23 00:52:08 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED 2025-11-23 00:52:08.598533 | orchestrator | 2025-11-23 00:52:08 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state STARTED 2025-11-23 00:52:08.600607 | orchestrator | 2025-11-23 00:52:08 | INFO  | Task 5cfa77a8-4f7b-47fd-9950-571f9f932204 is in state STARTED 2025-11-23 00:52:08.600637 | orchestrator | 2025-11-23 00:52:08 | INFO  | Wait 1 second(s) until the next check [identical task-state polling messages repeated every ~3 seconds from 00:52:11 through 00:53:06; trimmed] 2025-11-23 00:53:09.516132 | orchestrator | 2025-11-23 00:53:09 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED 2025-11-23 00:53:09.517716 | orchestrator | 2025-11-23 00:53:09 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED 2025-11-23 00:53:09.519722 | orchestrator | 2025-11-23 00:53:09 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state STARTED 2025-11-23 00:53:09.522989 | orchestrator | 2025-11-23 00:53:09 | INFO  | Task 5cfa77a8-4f7b-47fd-9950-571f9f932204 is in state SUCCESS 2025-11-23 00:53:09.523031 | orchestrator | 2025-11-23 00:53:09.524587 | orchestrator | 2025-11-23 00:53:09.524628 | orchestrator | PLAY [Set
kolla_action_mariadb] ************************************************ 2025-11-23 00:53:09.524641 | orchestrator | 2025-11-23 00:53:09.524653 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-11-23 00:53:09.524664 | orchestrator | Sunday 23 November 2025 00:50:14 +0000 (0:00:00.087) 0:00:00.087 ******* 2025-11-23 00:53:09.524675 | orchestrator | ok: [localhost] => { 2025-11-23 00:53:09.524688 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-11-23 00:53:09.524699 | orchestrator | } 2025-11-23 00:53:09.524711 | orchestrator | 2025-11-23 00:53:09.524722 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-11-23 00:53:09.524733 | orchestrator | Sunday 23 November 2025 00:50:15 +0000 (0:00:00.029) 0:00:00.116 ******* 2025-11-23 00:53:09.524745 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-11-23 00:53:09.524758 | orchestrator | ...ignoring 2025-11-23 00:53:09.524770 | orchestrator | 2025-11-23 00:53:09.524781 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-11-23 00:53:09.525078 | orchestrator | Sunday 23 November 2025 00:50:17 +0000 (0:00:02.729) 0:00:02.846 ******* 2025-11-23 00:53:09.525092 | orchestrator | skipping: [localhost] 2025-11-23 00:53:09.525103 | orchestrator | 2025-11-23 00:53:09.525114 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-11-23 00:53:09.525125 | orchestrator | Sunday 23 November 2025 00:50:17 +0000 (0:00:00.059) 0:00:02.905 ******* 2025-11-23 00:53:09.525136 | orchestrator | ok: [localhost] 2025-11-23 00:53:09.525148 | orchestrator | 2025-11-23 00:53:09.525158 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-11-23 00:53:09.525169 | orchestrator | 2025-11-23 00:53:09.525180 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:53:09.525191 | orchestrator | Sunday 23 November 2025 00:50:17 +0000 (0:00:00.157) 0:00:03.063 ******* 2025-11-23 00:53:09.525202 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:09.525213 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:09.525224 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:09.525235 | orchestrator | 2025-11-23 00:53:09.525246 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:53:09.525256 | orchestrator | Sunday 23 November 2025 00:50:18 +0000 (0:00:00.304) 0:00:03.367 ******* 2025-11-23 00:53:09.525267 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-11-23 00:53:09.525279 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-11-23 00:53:09.525289 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-11-23 00:53:09.525300 | orchestrator | 2025-11-23 00:53:09.525311 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-11-23 00:53:09.525322 | orchestrator | 2025-11-23 00:53:09.525333 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-11-23 00:53:09.525344 | orchestrator | Sunday 23 November 2025 00:50:18 +0000 (0:00:00.561) 0:00:03.929 ******* 2025-11-23 00:53:09.525388 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-23 00:53:09.525400 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-23 00:53:09.525411 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-23 00:53:09.525422 | orchestrator | 2025-11-23 00:53:09.525433 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-23 
00:53:09.525444 | orchestrator | Sunday 23 November 2025 00:50:19 +0000 (0:00:00.384) 0:00:04.313 ******* 2025-11-23 00:53:09.525475 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:53:09.525488 | orchestrator | 2025-11-23 00:53:09.525498 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-11-23 00:53:09.525509 | orchestrator | Sunday 23 November 2025 00:50:19 +0000 (0:00:00.442) 0:00:04.756 ******* 2025-11-23 00:53:09.525554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-23 00:53:09.525572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-23 00:53:09.525591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-23 00:53:09.525613 | orchestrator | 2025-11-23 00:53:09.525632 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-11-23 00:53:09.525644 | orchestrator | Sunday 23 November 2025 00:50:22 +0000 (0:00:02.706) 0:00:07.463 ******* 2025-11-23 00:53:09.525655 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:09.525667 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:09.525678 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:09.525689 | orchestrator | 2025-11-23 00:53:09.525700 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-11-23 00:53:09.525713 | orchestrator | Sunday 23 November 2025 00:50:23 +0000 (0:00:00.660) 0:00:08.123 ******* 2025-11-23 00:53:09.525730 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:09.525749 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:09.525770 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:09.525790 | orchestrator | 2025-11-23 00:53:09.525809 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-11-23 00:53:09.525829 | orchestrator | Sunday 23 November 2025 00:50:24 +0000 (0:00:01.342) 0:00:09.466 ******* 2025-11-23 00:53:09.525849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-23 00:53:09.525905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-23 00:53:09.525931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-23 00:53:09.525968 | orchestrator | 2025-11-23 00:53:09.525987 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-11-23 00:53:09.526006 | orchestrator | Sunday 23 November 2025 00:50:27 +0000 (0:00:03.580) 0:00:13.046 ******* 2025-11-23 00:53:09.526133 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:09.526153 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:09.526168 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:09.526184 | orchestrator | 2025-11-23 00:53:09.526201 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-11-23 00:53:09.526217 | orchestrator | Sunday 23 November 2025 00:50:29 +0000 (0:00:01.062) 0:00:14.109 ******* 2025-11-23 00:53:09.526233 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:53:09.526251 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:09.526268 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:53:09.526287 | orchestrator | 
2025-11-23 00:53:09.526303 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-11-23 00:53:09.526320 | orchestrator | Sunday 23 November 2025 00:50:32 +0000 (0:00:03.472) 0:00:17.581 *******
2025-11-23 00:53:09.526337 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:53:09.526381 | orchestrator |
2025-11-23 00:53:09.526401 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-11-23 00:53:09.526419 | orchestrator | Sunday 23 November 2025 00:50:32 +0000 (0:00:00.463) 0:00:18.045 *******
2025-11-23 00:53:09.526477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.526502 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.526522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.526557 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.526595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.526617 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.526636 | orchestrator |
2025-11-23 00:53:09.526655 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-11-23 00:53:09.526675 | orchestrator | Sunday 23 November 2025 00:50:35 +0000 (0:00:02.940) 0:00:20.985 *******
2025-11-23 00:53:09.526694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.526723 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.526749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.526767 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.526788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.526818 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.526837 | orchestrator |
2025-11-23 00:53:09.526852 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-11-23 00:53:09.526869 | orchestrator | Sunday 23 November 2025 00:50:38 +0000 (0:00:03.014) 0:00:24.000 *******
2025-11-23 00:53:09.526894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.526912 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.526942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.526972 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.526992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.527018 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.527037 | orchestrator |
2025-11-23 00:53:09.527056 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2025-11-23 00:53:09.527074 | orchestrator | Sunday 23 November 2025 00:50:41 +0000 (0:00:02.392) 0:00:26.392 *******
2025-11-23 00:53:09.527106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.527139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.527178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-11-23 00:53:09.527209 | orchestrator |
2025-11-23 00:53:09.527228 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-11-23 00:53:09.527247 | orchestrator | Sunday 23 November 2025 00:50:44 +0000 (0:00:03.397) 0:00:29.790 *******
2025-11-23 00:53:09.527266 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.527284 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:53:09.527303 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:53:09.527321 | orchestrator |
2025-11-23 00:53:09.527340 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-11-23 00:53:09.527404 | orchestrator | Sunday 23 November 2025 00:50:45 +0000 (0:00:00.837) 0:00:30.627 *******
2025-11-23 00:53:09.527424 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.527444 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:53:09.527463 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:53:09.527481 | orchestrator |
2025-11-23 00:53:09.527499 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-11-23 00:53:09.527516 | orchestrator | Sunday 23 November 2025 00:50:46 +0000 (0:00:00.610) 0:00:31.238 *******
2025-11-23 00:53:09.527534 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.527557 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:53:09.527576 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:53:09.527595 | orchestrator |
2025-11-23 00:53:09.527614 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-11-23 00:53:09.527632 | orchestrator | Sunday 23 November 2025 00:50:46 +0000 (0:00:00.367) 0:00:31.605 *******
2025-11-23 00:53:09.527652 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-11-23 00:53:09.527672 | orchestrator | ...ignoring
2025-11-23 00:53:09.527691 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-11-23 00:53:09.527710 | orchestrator | ...ignoring
2025-11-23 00:53:09.527729 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-11-23 00:53:09.527748 | orchestrator | ...ignoring
2025-11-23 00:53:09.527766 | orchestrator |
2025-11-23 00:53:09.527785 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-11-23 00:53:09.527804 | orchestrator | Sunday 23 November 2025 00:50:57 +0000 (0:00:10.852) 0:00:42.458 *******
2025-11-23 00:53:09.527823 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.527842 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:53:09.527861 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:53:09.527880 | orchestrator |
2025-11-23 00:53:09.527898 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-11-23 00:53:09.527917 | orchestrator | Sunday 23 November 2025 00:50:57 +0000 (0:00:00.367) 0:00:42.825 *******
2025-11-23 00:53:09.527936 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.527956 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.527974 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.527993 | orchestrator |
2025-11-23 00:53:09.528011 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-11-23 00:53:09.528042 | orchestrator | Sunday 23 November 2025 00:50:58 +0000 (0:00:00.575) 0:00:43.400 *******
2025-11-23 00:53:09.528061 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.528080 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.528091 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.528102 | orchestrator |
2025-11-23 00:53:09.528113 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-11-23 00:53:09.528123 | orchestrator | Sunday 23 November 2025 00:50:58 +0000 (0:00:00.394) 0:00:43.795 *******
2025-11-23 00:53:09.528134 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.528152 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.528163 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.528174 | orchestrator |
2025-11-23 00:53:09.528190 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-11-23 00:53:09.528209 | orchestrator | Sunday 23 November 2025 00:50:59 +0000 (0:00:00.359) 0:00:44.155 *******
2025-11-23 00:53:09.528227 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.528244 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:53:09.528261 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:53:09.528277 | orchestrator |
2025-11-23 00:53:09.528295 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-11-23 00:53:09.528315 | orchestrator | Sunday 23 November 2025 00:50:59 +0000 (0:00:00.395) 0:00:44.551 *******
2025-11-23 00:53:09.528345 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.528393 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.528406 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.528416 | orchestrator |
2025-11-23 00:53:09.528427 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-11-23 00:53:09.528438 | orchestrator | Sunday 23 November 2025 00:50:59 +0000 (0:00:00.356) 0:00:45.082 *******
2025-11-23 00:53:09.528448 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.528459 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.528470 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-11-23 00:53:09.528481 | orchestrator |
2025-11-23 00:53:09.528491 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-11-23 00:53:09.528502 | orchestrator | Sunday 23 November 2025 00:51:00 +0000 (0:00:00.356) 0:00:45.439 *******
2025-11-23 00:53:09.528513 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.528523 | orchestrator |
2025-11-23 00:53:09.528534 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-11-23 00:53:09.528544 | orchestrator | Sunday 23 November 2025 00:51:10 +0000 (0:00:09.844) 0:00:55.283 *******
2025-11-23 00:53:09.528555 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.528566 | orchestrator |
2025-11-23 00:53:09.528576 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-11-23 00:53:09.528587 | orchestrator | Sunday 23 November 2025 00:51:10 +0000 (0:00:00.130) 0:00:55.414 *******
2025-11-23 00:53:09.528598 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.528608 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.528619 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.528629 | orchestrator |
2025-11-23 00:53:09.528640 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-11-23 00:53:09.528651 | orchestrator | Sunday 23 November 2025 00:51:11 +0000 (0:00:00.833) 0:00:56.247 *******
2025-11-23 00:53:09.528661 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.528672 | orchestrator |
2025-11-23 00:53:09.528682 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-11-23 00:53:09.528700 | orchestrator | Sunday 23 November 2025 00:51:18 +0000 (0:00:06.974) 0:01:03.221 *******
2025-11-23 00:53:09.528719 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.528738 | orchestrator |
2025-11-23 00:53:09.528755 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-11-23 00:53:09.528774 | orchestrator | Sunday 23 November 2025 00:51:19 +0000 (0:00:01.663) 0:01:04.885 *******
2025-11-23 00:53:09.528807 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.528827 | orchestrator |
2025-11-23 00:53:09.528845 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-11-23 00:53:09.528864 | orchestrator | Sunday 23 November 2025 00:51:22 +0000 (0:00:02.243) 0:01:07.129 *******
2025-11-23 00:53:09.528881 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.528898 | orchestrator |
2025-11-23 00:53:09.528915 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-11-23 00:53:09.528933 | orchestrator | Sunday 23 November 2025 00:51:22 +0000 (0:00:00.117) 0:01:07.246 *******
2025-11-23 00:53:09.528949 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.528967 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.528983 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.528999 | orchestrator |
2025-11-23 00:53:09.529015 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-11-23 00:53:09.529032 | orchestrator | Sunday 23 November 2025 00:51:22 +0000 (0:00:00.272) 0:01:07.519 *******
2025-11-23 00:53:09.529050 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.529067 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-11-23 00:53:09.529083 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:53:09.529100 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:53:09.529117 | orchestrator |
2025-11-23 00:53:09.529133 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-11-23 00:53:09.529150 | orchestrator | skipping: no hosts matched
2025-11-23 00:53:09.529166 | orchestrator |
2025-11-23 00:53:09.529183 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-11-23 00:53:09.529200 | orchestrator |
2025-11-23 00:53:09.529216 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-11-23 00:53:09.529232 | orchestrator | Sunday 23 November 2025 00:51:22 +0000 (0:00:00.441) 0:01:07.961 *******
2025-11-23 00:53:09.529249 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:53:09.529266 | orchestrator |
2025-11-23 00:53:09.529282 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-11-23 00:53:09.529298 | orchestrator | Sunday 23 November 2025 00:51:39 +0000 (0:00:16.826) 0:01:24.788 *******
2025-11-23 00:53:09.529314 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:53:09.529331 | orchestrator |
2025-11-23 00:53:09.529347 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-11-23 00:53:09.529388 | orchestrator | Sunday 23 November 2025 00:51:56 +0000 (0:00:16.704) 0:01:41.492 *******
2025-11-23 00:53:09.529405 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:53:09.529421 | orchestrator |
2025-11-23 00:53:09.529438 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-11-23 00:53:09.529455 | orchestrator |
2025-11-23 00:53:09.529471 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-11-23 00:53:09.529487 | orchestrator | Sunday 23 November 2025 00:51:58 +0000 (0:00:02.251) 0:01:43.744 *******
2025-11-23 00:53:09.529517 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:53:09.529534 | orchestrator |
2025-11-23 00:53:09.529550 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-11-23 00:53:09.529567 | orchestrator | Sunday 23 November 2025 00:52:15 +0000 (0:00:16.883) 0:02:00.628 *******
2025-11-23 00:53:09.529582 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:53:09.529592 | orchestrator |
2025-11-23 00:53:09.529601 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-11-23 00:53:09.529611 | orchestrator | Sunday 23 November 2025 00:52:32 +0000 (0:00:16.534) 0:02:17.162 *******
2025-11-23 00:53:09.529621 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:53:09.529630 | orchestrator |
2025-11-23 00:53:09.529640 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-11-23 00:53:09.529650 | orchestrator |
2025-11-23 00:53:09.529667 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-11-23 00:53:09.529677 | orchestrator | Sunday 23 November 2025 00:52:34 +0000 (0:00:02.304) 0:02:19.467 *******
2025-11-23 00:53:09.529694 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.529704 | orchestrator |
2025-11-23 00:53:09.529714 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-11-23 00:53:09.529723 | orchestrator | Sunday 23 November 2025 00:52:51 +0000 (0:00:16.648) 0:02:36.116 *******
2025-11-23 00:53:09.529733 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.529742 | orchestrator |
2025-11-23 00:53:09.529752 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-11-23 00:53:09.529761 | orchestrator | Sunday 23 November 2025 00:52:51 +0000 (0:00:00.544) 0:02:36.660 *******
2025-11-23 00:53:09.529771 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.529780 | orchestrator |
2025-11-23 00:53:09.529790 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-11-23 00:53:09.529799 | orchestrator |
2025-11-23 00:53:09.529809 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-11-23 00:53:09.529818 | orchestrator | Sunday 23 November 2025 00:52:53 +0000 (0:00:02.349) 0:02:39.009 *******
2025-11-23 00:53:09.529828 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:53:09.529837 | orchestrator |
2025-11-23 00:53:09.529846 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-11-23 00:53:09.529856 | orchestrator | Sunday 23 November 2025 00:52:54 +0000 (0:00:00.463) 0:02:39.473 *******
2025-11-23 00:53:09.529865 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.529875 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.529884 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.529894 | orchestrator |
2025-11-23 00:53:09.529903 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-11-23 00:53:09.529913 | orchestrator | Sunday 23 November 2025 00:52:56 +0000 (0:00:02.432) 0:02:41.905 *******
2025-11-23 00:53:09.529922 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.529932 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.529941 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.529951 | orchestrator |
2025-11-23 00:53:09.529960 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-11-23 00:53:09.529970 | orchestrator | Sunday 23 November 2025 00:52:59 +0000 (0:00:02.273) 0:02:44.178 *******
2025-11-23 00:53:09.529979 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.529989 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.529998 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.530008 | orchestrator |
2025-11-23 00:53:09.530050 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-11-23 00:53:09.530062 | orchestrator | Sunday 23 November 2025 00:53:01 +0000 (0:00:02.332) 0:02:46.511 *******
2025-11-23 00:53:09.530071 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.530081 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.530090 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:53:09.530100 | orchestrator |
2025-11-23 00:53:09.530110 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-11-23 00:53:09.530120 | orchestrator | Sunday 23 November 2025 00:53:03 +0000 (0:00:02.305) 0:02:48.816 *******
2025-11-23 00:53:09.530129 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:09.530139 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:53:09.530148 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:53:09.530158 | orchestrator |
2025-11-23 00:53:09.530167 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-11-23 00:53:09.530177 | orchestrator | Sunday 23 November 2025 00:53:06 +0000 (0:00:02.669) 0:02:51.485 *******
2025-11-23 00:53:09.530186 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:53:09.530196 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:53:09.530206 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:53:09.530215 | orchestrator |
2025-11-23 00:53:09.530225 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:53:09.530242 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-11-23 00:53:09.530253 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-11-23 00:53:09.530264 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-11-23 00:53:09.530274 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-11-23 00:53:09.530284 | orchestrator |
2025-11-23 00:53:09.530294 | orchestrator |
2025-11-23 00:53:09.530303 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:53:09.530313 | orchestrator | Sunday 23 November 2025 00:53:06 +0000 (0:00:00.203) 0:02:51.689 *******
2025-11-23 00:53:09.530322 | orchestrator | ===============================================================================
2025-11-23 00:53:09.530337 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.71s
2025-11-23 00:53:09.530347 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.24s
2025-11-23 00:53:09.530378 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.65s
2025-11-23 00:53:09.530388 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.85s
2025-11-23 00:53:09.530398 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.84s
2025-11-23 00:53:09.530408 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.97s
2025-11-23 00:53:09.530424 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.56s
2025-11-23 00:53:09.530434 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.58s
2025-11-23 00:53:09.530443 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.47s
2025-11-23 00:53:09.530453 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.40s
2025-11-23 00:53:09.530463 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.01s
2025-11-23 00:53:09.530472 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.94s
2025-11-23 00:53:09.530482 | orchestrator | Check MariaDB service --------------------------------------------------- 2.73s
2025-11-23 00:53:09.530492 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.71s
2025-11-23 00:53:09.530501 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.67s
2025-11-23 00:53:09.530511 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.43s
2025-11-23 00:53:09.530520 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.39s
2025-11-23 00:53:09.530530 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.35s
2025-11-23 00:53:09.530540 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.33s
2025-11-23 00:53:09.530549 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.31s
2025-11-23 00:53:09.530559 | orchestrator | 2025-11-23 00:53:09 | INFO  | Task 1eb8aa6e-41e8-4797-baa3-e02b7a1d3989 is in state STARTED
2025-11-23 00:53:09.530569 | orchestrator | 2025-11-23 00:53:09 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:53:12.573431 | orchestrator | 2025-11-23 00:53:12 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED
2025-11-23 00:53:12.575416 | orchestrator | 2025-11-23 00:53:12 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:53:12.577382 | orchestrator | 2025-11-23 00:53:12 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state STARTED
2025-11-23 00:53:12.579846 | orchestrator | 2025-11-23 00:53:12 | INFO  | Task 1eb8aa6e-41e8-4797-baa3-e02b7a1d3989 is in state STARTED
2025-11-23 00:53:12.579875 | orchestrator | 2025-11-23 00:53:12 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:53:15.624094 | orchestrator | 2025-11-23 00:53:15 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED
2025-11-23 00:53:15.625211 | orchestrator | 2025-11-23 00:53:15 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:53:15.628720 | orchestrator | 2025-11-23 00:53:15 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state STARTED
2025-11-23 00:53:15.630122 | orchestrator | 2025-11-23 00:53:15 | INFO  | Task 1eb8aa6e-41e8-4797-baa3-e02b7a1d3989 is in state STARTED
2025-11-23 00:53:15.630157 | orchestrator | 2025-11-23 00:53:15 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:53:18.671554 | orchestrator | 2025-11-23 00:53:18 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED
2025-11-23 00:53:18.673015 | orchestrator | 2025-11-23 00:53:18 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:53:18.675034 | orchestrator | 2025-11-23 00:53:18 | INFO  | Task 72c82a38-1f30-4d33-9ffe-fab2e0b52c0a is in state SUCCESS
2025-11-23 00:53:18.676756 | orchestrator |
2025-11-23 00:53:18.676785 | orchestrator |
2025-11-23 00:53:18.676791 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 00:53:18.676798 | orchestrator |
2025-11-23 00:53:18.676804 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 00:53:18.676914 | orchestrator | Sunday 23 November 2025 00:50:14 +0000 (0:00:00.186) 0:00:00.186 *******
2025-11-23 00:53:18.676921 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:53:18.676928 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:53:18.676934 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:53:18.676940 | orchestrator |
2025-11-23 00:53:18.676947 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 00:53:18.676953 | orchestrator | Sunday 23 November 2025 00:50:15 +0000 (0:00:00.242) 0:00:00.428 *******
2025-11-23 00:53:18.676960 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-11-23 00:53:18.676966 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-11-23 00:53:18.676973 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-11-23 00:53:18.676979 | orchestrator |
2025-11-23 00:53:18.676985 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-11-23 00:53:18.676991 | orchestrator |
2025-11-23 00:53:18.677009 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-11-23 00:53:18.677016 | orchestrator | Sunday 23 November 2025 00:50:15 +0000 (0:00:00.400) 0:00:00.829 *******
2025-11-23 00:53:18.677022 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:53:18.677028 | orchestrator |
2025-11-23 00:53:18.677034 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-11-23 00:53:18.677040 | orchestrator | Sunday 23 November 2025 00:50:15 +0000 (0:00:00.442) 0:00:01.271 *******
2025-11-23 00:53:18.677046 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-11-23 00:53:18.677052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-11-23 00:53:18.677058 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-11-23 00:53:18.677064 | orchestrator |
2025-11-23 00:53:18.677070 | orchestrator | TASK [opensearch : Ensuring config directories exist]
************************** 2025-11-23 00:53:18.677076 | orchestrator | Sunday 23 November 2025 00:50:16 +0000 (0:00:00.664) 0:00:01.935 ******* 2025-11-23 00:53:18.677085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-11-23 00:53:18.677269 | orchestrator | 2025-11-23 00:53:18.677276 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-23 00:53:18.677283 | orchestrator | Sunday 23 November 2025 00:50:18 +0000 (0:00:01.483) 0:00:03.419 ******* 2025-11-23 00:53:18.677290 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:53:18.677297 | orchestrator | 2025-11-23 00:53:18.677304 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-11-23 00:53:18.677310 | orchestrator | Sunday 23 November 2025 00:50:18 +0000 (0:00:00.487) 0:00:03.907 ******* 2025-11-23 00:53:18.677325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677422 | orchestrator | 2025-11-23 00:53:18.677428 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-11-23 00:53:18.677439 | orchestrator | Sunday 23 November 2025 00:50:20 +0000 (0:00:02.361) 0:00:06.269 ******* 2025-11-23 00:53:18.677445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:53:18.677452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:53:18.677459 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:18.677465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:53:18.677479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:53:18.677491 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:18.677498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:53:18.677505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:53:18.677511 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:18.677518 | orchestrator | 2025-11-23 00:53:18.677524 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-11-23 00:53:18.677530 | orchestrator | Sunday 23 November 2025 00:50:21 +0000 (0:00:00.962) 0:00:07.231 ******* 2025-11-23 00:53:18.677536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:53:18.677550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:53:18.677561 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:18.677568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:53:18.677575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:53:18.677581 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:18.677588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-23 00:53:18.677600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-23 00:53:18.677611 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:18.677617 | orchestrator | 2025-11-23 00:53:18.677629 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-11-23 00:53:18.677636 | orchestrator | Sunday 23 November 2025 00:50:22 +0000 (0:00:00.732) 0:00:07.963 ******* 2025-11-23 00:53:18.677642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677695 | orchestrator | 2025-11-23 00:53:18.677701 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-11-23 00:53:18.677707 | orchestrator | Sunday 23 November 2025 00:50:25 +0000 (0:00:02.419) 0:00:10.382 ******* 2025-11-23 00:53:18.677713 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:18.677720 | orchestrator | changed: 
[testbed-node-1] 2025-11-23 00:53:18.677726 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:53:18.677732 | orchestrator | 2025-11-23 00:53:18.677738 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-11-23 00:53:18.677744 | orchestrator | Sunday 23 November 2025 00:50:27 +0000 (0:00:02.908) 0:00:13.291 ******* 2025-11-23 00:53:18.677750 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:18.677756 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:53:18.677762 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:53:18.677768 | orchestrator | 2025-11-23 00:53:18.677774 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-11-23 00:53:18.677781 | orchestrator | Sunday 23 November 2025 00:50:29 +0000 (0:00:01.606) 0:00:14.897 ******* 2025-11-23 00:53:18.677787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-23 00:53:18.677819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-23 00:53:18.677848 | orchestrator | 2025-11-23 00:53:18.677856 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-23 00:53:18.677863 | orchestrator | Sunday 23 November 2025 00:50:31 +0000 (0:00:02.114) 0:00:17.011 ******* 2025-11-23 00:53:18.677870 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:18.677877 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:18.677884 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:18.677891 | orchestrator | 2025-11-23 00:53:18.677898 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-11-23 00:53:18.677905 | orchestrator | Sunday 23 November 2025 00:50:31 +0000 (0:00:00.255) 0:00:17.267 ******* 2025-11-23 00:53:18.677912 | orchestrator | 2025-11-23 00:53:18.677919 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-11-23 00:53:18.677926 | orchestrator | Sunday 23 November 2025 00:50:31 +0000 (0:00:00.059) 0:00:17.327 ******* 2025-11-23 00:53:18.677933 | orchestrator | 2025-11-23 00:53:18.677939 | orchestrator | TASK [opensearch : Flush handlers] 
********************************************* 2025-11-23 00:53:18.677946 | orchestrator | Sunday 23 November 2025 00:50:32 +0000 (0:00:00.060) 0:00:17.387 ******* 2025-11-23 00:53:18.677953 | orchestrator | 2025-11-23 00:53:18.677960 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-11-23 00:53:18.677967 | orchestrator | Sunday 23 November 2025 00:50:32 +0000 (0:00:00.060) 0:00:17.447 ******* 2025-11-23 00:53:18.677974 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:18.677981 | orchestrator | 2025-11-23 00:53:18.677988 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-11-23 00:53:18.677994 | orchestrator | Sunday 23 November 2025 00:50:32 +0000 (0:00:00.198) 0:00:17.646 ******* 2025-11-23 00:53:18.678000 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:18.678006 | orchestrator | 2025-11-23 00:53:18.678045 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-11-23 00:53:18.678054 | orchestrator | Sunday 23 November 2025 00:50:32 +0000 (0:00:00.452) 0:00:18.099 ******* 2025-11-23 00:53:18.678060 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:18.678066 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:53:18.678072 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:53:18.678078 | orchestrator | 2025-11-23 00:53:18.678084 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-11-23 00:53:18.678090 | orchestrator | Sunday 23 November 2025 00:51:40 +0000 (0:01:07.835) 0:01:25.934 ******* 2025-11-23 00:53:18.678096 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:18.678102 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:53:18.678108 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:53:18.678114 | orchestrator | 2025-11-23 00:53:18.678120 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2025-11-23 00:53:18.678126 | orchestrator | Sunday 23 November 2025 00:53:06 +0000 (0:01:25.936) 0:02:51.871 ******* 2025-11-23 00:53:18.678132 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:53:18.678143 | orchestrator | 2025-11-23 00:53:18.678149 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-11-23 00:53:18.678155 | orchestrator | Sunday 23 November 2025 00:53:07 +0000 (0:00:00.601) 0:02:52.473 ******* 2025-11-23 00:53:18.678161 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:18.678167 | orchestrator | 2025-11-23 00:53:18.678173 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-11-23 00:53:18.678179 | orchestrator | Sunday 23 November 2025 00:53:09 +0000 (0:00:02.488) 0:02:54.962 ******* 2025-11-23 00:53:18.678185 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:18.678191 | orchestrator | 2025-11-23 00:53:18.678197 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-11-23 00:53:18.678204 | orchestrator | Sunday 23 November 2025 00:53:12 +0000 (0:00:02.406) 0:02:57.368 ******* 2025-11-23 00:53:18.678210 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:18.678216 | orchestrator | 2025-11-23 00:53:18.678222 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-11-23 00:53:18.678228 | orchestrator | Sunday 23 November 2025 00:53:14 +0000 (0:00:02.827) 0:03:00.195 ******* 2025-11-23 00:53:18.678234 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:18.678240 | orchestrator | 2025-11-23 00:53:18.678246 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:53:18.678252 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 
failed=0 skipped=5  rescued=0 ignored=0 2025-11-23 00:53:18.678260 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-23 00:53:18.678266 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-23 00:53:18.678272 | orchestrator | 2025-11-23 00:53:18.678278 | orchestrator | 2025-11-23 00:53:18.678284 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:53:18.678294 | orchestrator | Sunday 23 November 2025 00:53:17 +0000 (0:00:02.640) 0:03:02.835 ******* 2025-11-23 00:53:18.678300 | orchestrator | =============================================================================== 2025-11-23 00:53:18.678377 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.94s 2025-11-23 00:53:18.678390 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.84s 2025-11-23 00:53:18.678397 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.91s 2025-11-23 00:53:18.678403 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.83s 2025-11-23 00:53:18.678409 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.64s 2025-11-23 00:53:18.678415 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.49s 2025-11-23 00:53:18.678421 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.42s 2025-11-23 00:53:18.678426 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.41s 2025-11-23 00:53:18.678435 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.36s 2025-11-23 00:53:18.678441 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s 
2025-11-23 00:53:18.678447 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.61s 2025-11-23 00:53:18.678453 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.48s 2025-11-23 00:53:18.678459 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.96s 2025-11-23 00:53:18.678465 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.73s 2025-11-23 00:53:18.678471 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.66s 2025-11-23 00:53:18.678477 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s 2025-11-23 00:53:18.678488 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2025-11-23 00:53:18.678495 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.45s 2025-11-23 00:53:18.678501 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2025-11-23 00:53:18.678506 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2025-11-23 00:53:18.678513 | orchestrator | 2025-11-23 00:53:18 | INFO  | Task 1eb8aa6e-41e8-4797-baa3-e02b7a1d3989 is in state STARTED 2025-11-23 00:53:18.678519 | orchestrator | 2025-11-23 00:53:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:53:21.705154 | orchestrator | 2025-11-23 00:53:21 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED 2025-11-23 00:53:21.705831 | orchestrator | 2025-11-23 00:53:21 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED 2025-11-23 00:53:21.706685 | orchestrator | 2025-11-23 00:53:21 | INFO  | Task 1eb8aa6e-41e8-4797-baa3-e02b7a1d3989 is in state STARTED 2025-11-23 00:53:21.707309 | orchestrator | 2025-11-23 00:53:21 | INFO  | Wait 1 second(s) 
until the next check 2025-11-23 00:53:24.737902 | orchestrator | 2025-11-23 00:53:24 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED 2025-11-23 00:53:24.738416 | orchestrator | 2025-11-23 00:53:24 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED 2025-11-23 00:53:24.739324 | orchestrator | 2025-11-23 00:53:24 | INFO  | Task 1eb8aa6e-41e8-4797-baa3-e02b7a1d3989 is in state STARTED 2025-11-23 00:53:24.739470 | orchestrator | 2025-11-23 00:53:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:53:27.771031 | orchestrator | 2025-11-23 00:53:27 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED 2025-11-23 00:53:27.771497 | orchestrator | 2025-11-23 00:53:27 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED 2025-11-23 00:53:27.773336 | orchestrator | 2025-11-23 00:53:27 | INFO  | Task 1eb8aa6e-41e8-4797-baa3-e02b7a1d3989 is in state SUCCESS 2025-11-23 00:53:27.773440 | orchestrator | 2025-11-23 00:53:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:53:27.775162 | orchestrator | 2025-11-23 00:53:27.775200 | orchestrator | 2025-11-23 00:53:27.775212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 00:53:27.775224 | orchestrator | 2025-11-23 00:53:27.775236 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:53:27.775247 | orchestrator | Sunday 23 November 2025 00:53:10 +0000 (0:00:00.234) 0:00:00.234 ******* 2025-11-23 00:53:27.775258 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.775270 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.775280 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.775291 | orchestrator | 2025-11-23 00:53:27.775302 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:53:27.775313 | orchestrator | Sunday 23 November 2025 
00:53:10 +0000 (0:00:00.248) 0:00:00.483 ******* 2025-11-23 00:53:27.775323 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-11-23 00:53:27.775335 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-11-23 00:53:27.775382 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-11-23 00:53:27.775394 | orchestrator | 2025-11-23 00:53:27.775405 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-11-23 00:53:27.775416 | orchestrator | 2025-11-23 00:53:27.775427 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-23 00:53:27.775438 | orchestrator | Sunday 23 November 2025 00:53:11 +0000 (0:00:00.378) 0:00:00.861 ******* 2025-11-23 00:53:27.775449 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:53:27.775487 | orchestrator | 2025-11-23 00:53:27.775499 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-11-23 00:53:27.775510 | orchestrator | Sunday 23 November 2025 00:53:11 +0000 (0:00:00.458) 0:00:01.319 ******* 2025-11-23 00:53:27.775545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-23 00:53:27.775578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-23 00:53:27.775609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-23 00:53:27.775623 | orchestrator | 2025-11-23 00:53:27.775634 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 
2025-11-23 00:53:27.775645 | orchestrator | Sunday 23 November 2025 00:53:12 +0000 (0:00:01.245) 0:00:02.565 ******* 2025-11-23 00:53:27.775656 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.775666 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.775677 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.775688 | orchestrator | 2025-11-23 00:53:27.775699 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-23 00:53:27.775709 | orchestrator | Sunday 23 November 2025 00:53:13 +0000 (0:00:00.366) 0:00:02.932 ******* 2025-11-23 00:53:27.775720 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-23 00:53:27.775737 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-23 00:53:27.775749 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-11-23 00:53:27.775759 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-11-23 00:53:27.775770 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-11-23 00:53:27.775780 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-11-23 00:53:27.775800 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-11-23 00:53:27.775810 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-11-23 00:53:27.775821 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-23 00:53:27.775831 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-23 00:53:27.775842 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-11-23 00:53:27.775852 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'masakari', 'enabled': False})  2025-11-23 00:53:27.775878 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-11-23 00:53:27.775900 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-11-23 00:53:27.775911 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-11-23 00:53:27.775922 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-11-23 00:53:27.775932 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-23 00:53:27.775943 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-23 00:53:27.775953 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-11-23 00:53:27.775969 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-11-23 00:53:27.775980 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-11-23 00:53:27.775990 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-11-23 00:53:27.776001 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-11-23 00:53:27.776011 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-11-23 00:53:27.776023 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-11-23 00:53:27.776035 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-11-23 00:53:27.776046 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-11-23 00:53:27.776057 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-11-23 00:53:27.776068 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-11-23 00:53:27.776079 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-11-23 00:53:27.776089 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-11-23 00:53:27.776100 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-11-23 00:53:27.776111 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-11-23 00:53:27.776122 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-11-23 00:53:27.776133 | orchestrator | 2025-11-23 00:53:27.776144 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.776165 | orchestrator | Sunday 23 November 2025 00:53:13 +0000 (0:00:00.629) 0:00:03.562 ******* 2025-11-23 00:53:27.776176 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.776186 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.776197 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.776208 | orchestrator | 2025-11-23 
00:53:27.776218 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.776229 | orchestrator | Sunday 23 November 2025 00:53:14 +0000 (0:00:00.311) 0:00:03.873 ******* 2025-11-23 00:53:27.776240 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.776251 | orchestrator | 2025-11-23 00:53:27.776267 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.776278 | orchestrator | Sunday 23 November 2025 00:53:14 +0000 (0:00:00.126) 0:00:04.000 ******* 2025-11-23 00:53:27.776289 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.776300 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.776310 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.776321 | orchestrator | 2025-11-23 00:53:27.776332 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.776385 | orchestrator | Sunday 23 November 2025 00:53:14 +0000 (0:00:00.392) 0:00:04.393 ******* 2025-11-23 00:53:27.776407 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.776426 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.776441 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.776452 | orchestrator | 2025-11-23 00:53:27.776462 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.776473 | orchestrator | Sunday 23 November 2025 00:53:15 +0000 (0:00:00.304) 0:00:04.697 ******* 2025-11-23 00:53:27.776484 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.776494 | orchestrator | 2025-11-23 00:53:27.776505 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.776515 | orchestrator | Sunday 23 November 2025 00:53:15 +0000 (0:00:00.113) 0:00:04.811 ******* 2025-11-23 00:53:27.776526 | orchestrator | skipping: [testbed-node-0] 
2025-11-23 00:53:27.776537 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.776547 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.776558 | orchestrator | 2025-11-23 00:53:27.776569 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.776579 | orchestrator | Sunday 23 November 2025 00:53:15 +0000 (0:00:00.277) 0:00:05.089 ******* 2025-11-23 00:53:27.776590 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.776600 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.776611 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.776621 | orchestrator | 2025-11-23 00:53:27.776632 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.776643 | orchestrator | Sunday 23 November 2025 00:53:15 +0000 (0:00:00.284) 0:00:05.373 ******* 2025-11-23 00:53:27.776653 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.776664 | orchestrator | 2025-11-23 00:53:27.776674 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.776691 | orchestrator | Sunday 23 November 2025 00:53:15 +0000 (0:00:00.236) 0:00:05.610 ******* 2025-11-23 00:53:27.776702 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.776713 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.776724 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.776734 | orchestrator | 2025-11-23 00:53:27.776745 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.776755 | orchestrator | Sunday 23 November 2025 00:53:16 +0000 (0:00:00.320) 0:00:05.931 ******* 2025-11-23 00:53:27.776766 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.776777 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.776787 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.776798 | 
orchestrator | 2025-11-23 00:53:27.776808 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.776827 | orchestrator | Sunday 23 November 2025 00:53:16 +0000 (0:00:00.285) 0:00:06.216 ******* 2025-11-23 00:53:27.776838 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.776848 | orchestrator | 2025-11-23 00:53:27.776859 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.776870 | orchestrator | Sunday 23 November 2025 00:53:16 +0000 (0:00:00.143) 0:00:06.360 ******* 2025-11-23 00:53:27.776880 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.776891 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.776901 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.776912 | orchestrator | 2025-11-23 00:53:27.776923 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.776933 | orchestrator | Sunday 23 November 2025 00:53:16 +0000 (0:00:00.282) 0:00:06.642 ******* 2025-11-23 00:53:27.776944 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.776954 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.776965 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.776975 | orchestrator | 2025-11-23 00:53:27.776986 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.776997 | orchestrator | Sunday 23 November 2025 00:53:17 +0000 (0:00:00.418) 0:00:07.061 ******* 2025-11-23 00:53:27.777007 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777018 | orchestrator | 2025-11-23 00:53:27.777029 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.777039 | orchestrator | Sunday 23 November 2025 00:53:17 +0000 (0:00:00.129) 0:00:07.190 ******* 2025-11-23 00:53:27.777050 | orchestrator | 
skipping: [testbed-node-0] 2025-11-23 00:53:27.777061 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.777071 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.777082 | orchestrator | 2025-11-23 00:53:27.777093 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.777103 | orchestrator | Sunday 23 November 2025 00:53:17 +0000 (0:00:00.267) 0:00:07.457 ******* 2025-11-23 00:53:27.777114 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.777124 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.777135 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.777145 | orchestrator | 2025-11-23 00:53:27.777156 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.777167 | orchestrator | Sunday 23 November 2025 00:53:18 +0000 (0:00:00.278) 0:00:07.736 ******* 2025-11-23 00:53:27.777177 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777188 | orchestrator | 2025-11-23 00:53:27.777199 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.777209 | orchestrator | Sunday 23 November 2025 00:53:18 +0000 (0:00:00.106) 0:00:07.843 ******* 2025-11-23 00:53:27.777220 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777231 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.777241 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.777252 | orchestrator | 2025-11-23 00:53:27.777263 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.777280 | orchestrator | Sunday 23 November 2025 00:53:18 +0000 (0:00:00.255) 0:00:08.098 ******* 2025-11-23 00:53:27.777291 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.777301 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.777312 | orchestrator | ok: [testbed-node-2] 2025-11-23 
00:53:27.777322 | orchestrator | 2025-11-23 00:53:27.777333 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.777370 | orchestrator | Sunday 23 November 2025 00:53:18 +0000 (0:00:00.421) 0:00:08.520 ******* 2025-11-23 00:53:27.777383 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777393 | orchestrator | 2025-11-23 00:53:27.777404 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.777415 | orchestrator | Sunday 23 November 2025 00:53:18 +0000 (0:00:00.123) 0:00:08.643 ******* 2025-11-23 00:53:27.777425 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777443 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.777454 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.777465 | orchestrator | 2025-11-23 00:53:27.777475 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.777486 | orchestrator | Sunday 23 November 2025 00:53:19 +0000 (0:00:00.259) 0:00:08.903 ******* 2025-11-23 00:53:27.777497 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.777507 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.777518 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.777529 | orchestrator | 2025-11-23 00:53:27.777540 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.777550 | orchestrator | Sunday 23 November 2025 00:53:19 +0000 (0:00:00.266) 0:00:09.169 ******* 2025-11-23 00:53:27.777561 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777572 | orchestrator | 2025-11-23 00:53:27.777582 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.777593 | orchestrator | Sunday 23 November 2025 00:53:19 +0000 (0:00:00.114) 0:00:09.284 ******* 2025-11-23 00:53:27.777603 | 
orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777640 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.777651 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.777662 | orchestrator | 2025-11-23 00:53:27.777672 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.777683 | orchestrator | Sunday 23 November 2025 00:53:19 +0000 (0:00:00.258) 0:00:09.542 ******* 2025-11-23 00:53:27.777694 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.777704 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.777720 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:53:27.777731 | orchestrator | 2025-11-23 00:53:27.777742 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.777752 | orchestrator | Sunday 23 November 2025 00:53:20 +0000 (0:00:00.414) 0:00:09.957 ******* 2025-11-23 00:53:27.777763 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777774 | orchestrator | 2025-11-23 00:53:27.777784 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.777795 | orchestrator | Sunday 23 November 2025 00:53:20 +0000 (0:00:00.107) 0:00:10.065 ******* 2025-11-23 00:53:27.777806 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777816 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.777827 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.777837 | orchestrator | 2025-11-23 00:53:27.777848 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-23 00:53:27.777859 | orchestrator | Sunday 23 November 2025 00:53:20 +0000 (0:00:00.259) 0:00:10.324 ******* 2025-11-23 00:53:27.777869 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:53:27.777880 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:53:27.777890 | orchestrator | ok: 
[testbed-node-2] 2025-11-23 00:53:27.777901 | orchestrator | 2025-11-23 00:53:27.777912 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-23 00:53:27.777922 | orchestrator | Sunday 23 November 2025 00:53:20 +0000 (0:00:00.278) 0:00:10.602 ******* 2025-11-23 00:53:27.777933 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777943 | orchestrator | 2025-11-23 00:53:27.777954 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-23 00:53:27.777964 | orchestrator | Sunday 23 November 2025 00:53:21 +0000 (0:00:00.130) 0:00:10.733 ******* 2025-11-23 00:53:27.777975 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:53:27.777986 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:53:27.777996 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:53:27.778007 | orchestrator | 2025-11-23 00:53:27.778073 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-11-23 00:53:27.778084 | orchestrator | Sunday 23 November 2025 00:53:21 +0000 (0:00:00.383) 0:00:11.117 ******* 2025-11-23 00:53:27.778095 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:53:27.778106 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:53:27.778124 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:53:27.778134 | orchestrator | 2025-11-23 00:53:27.778145 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-11-23 00:53:27.778156 | orchestrator | Sunday 23 November 2025 00:53:22 +0000 (0:00:01.552) 0:00:12.669 ******* 2025-11-23 00:53:27.778166 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-23 00:53:27.778177 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-23 00:53:27.778188 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-23 00:53:27.778198 | orchestrator | 2025-11-23 00:53:27.778209 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-11-23 00:53:27.778220 | orchestrator | Sunday 23 November 2025 00:53:24 +0000 (0:00:01.554) 0:00:14.224 ******* 2025-11-23 00:53:27.778232 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: . Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'. 2025-11-23 00:53:27.778304 | orchestrator | failed: [testbed-node-0] (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) => {"ansible_loop_var": "item", "changed": false, "item": "/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2", "msg": "AnsibleError: template error while templating string: Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'.. 
String: DEBUG = {{ horizon_logging_debug }}\nTEMPLATE_DEBUG = DEBUG\nCOMPRESS_OFFLINE = True\nWEBROOT = '/'\nALLOWED_HOSTS = ['*']\n\n{% if horizon_backend_database | bool %}\nSESSION_ENGINE = 'django.contrib.sessions.backends.db'\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': '{{ horizon_database_name }}',\n 'USER': '{{ horizon_database_user }}',\n 'PASSWORD': '{{ horizon_database_password }}',\n 'HOST': '{{ database_address }}',\n 'PORT': '{{ database_port }}'\n }\n}\n{% elif groups['memcached'] | length > 0 and not horizon_backend_database | bool %}\nSESSION_ENGINE = 'django.contrib.sessions.backends.cache'\n\n{% if groups['memcached'] | length > 0 %}\nCACHES['default']['LOCATION'] = [{% for host in groups['memcached'] %}'{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ memcached_port }}'{% if not loop.last %},{% endif %}{% endfor %}]\nCACHES['default']['OPTIONS'] = {'ignore_exc': True}\nCACHES['default']['OPTIONS'] = {\n \"no_delay\": True,\n \"ignore_exc\": True,\n \"max_pool_size\": 4,\n \"use_pooling\": True,\n}\n{% endif %}\n\n{% if kolla_enable_tls_external | bool or kolla_enable_tls_internal | bool %}\nSECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')\nCSRF_COOKIE_SECURE = True\nSESSION_COOKIE_SECURE = True\n{% endif %}\n\nOPENSTACK_API_VERSIONS = {\n \"identity\": 3,\n}\n\nOPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = {{ horizon_keystone_multidomain | bool }}\nOPENSTACK_KEYSTONE_DOMAIN_DROPDOWN = {{ 'True' if horizon_keystone_domain_choices|length > 1 else 'False' }}\nOPENSTACK_KEYSTONE_DOMAIN_CHOICES = (\n{% for key, value in horizon_keystone_domain_choices.items() %}\n ('{{ key }}', '{{ value }}'),\n{% endfor %}\n)\n\nLOCAL_PATH = '/tmp'\nSECRET_KEY='{{ horizon_secret_key }}'\n\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n\n{% if multiple_regions_names|length > 1 %}\nAVAILABLE_REGIONS = [\n{% for region_name in multiple_regions_names %}\n ('{{ keystone_internal_url }}', 
'{{ region_name }}'),\n{% endfor %}\n]\n{% endif %}\n\nOPENSTACK_HOST = \"{{ kolla_internal_fqdn }}\"\n# TODO(fprzewozn): URL /v3 suffix is required until Horizon bug #2073639 is resolved\nOPENSTACK_KEYSTONE_URL = \"{{ horizon_keystone_url }}/v3\"\nOPENSTACK_KEYSTONE_DEFAULT_ROLE = \"{{ keystone_default_user_role }}\"\n\n{% if enable_keystone_federation | bool %}\nWEBSSO_ENABLED = True\nWEBSSO_KEYSTONE_URL = \"{{ keystone_public_url }}/v3\"\nWEBSSO_CHOICES = (\n (\"credentials\", _(\"Keystone Credentials\")),\n {% for idp in keystone_identity_providers %}\n (\"{{ idp.name }}_{{ idp.protocol }}\", \"{{ idp.public_name }}\"),\n {% endfor %}\n)\nWEBSSO_IDP_MAPPING = {\n{% for idp in keystone_identity_providers %}\n \"{{ idp.name }}_{{ idp.protocol }}\": (\"{{ idp.name }}\", \"{{ idp.protocol }}\"),\n{% endfor %}\n}\n{% endif %}\n\n{% if openstack_cacert == \"\" %}\n{% else %}\nOPENSTACK_SSL_CACERT = '{{ openstack_cacert }}'\n{% endif %}\n\nOPENSTACK_KEYSTONE_BACKEND = {\n 'name': 'native',\n 'can_edit_user': True,\n 'can_edit_group': True,\n 'can_edit_project': True,\n 'can_edit_domain': True,\n 'can_edit_role': True,\n}\n\nOPENSTACK_HYPERVISOR_FEATURES = {\n 'can_set_mount_point': False,\n 'can_set_password': False,\n 'requires_keypair': False,\n 'enable_quotas': True\n}\n\nOPENSTACK_CINDER_FEATURES = {\n 'enable_backup': {{ 'True' if enable_cinder_backup | bool else 'False' }},\n}\n\nOPENSTACK_NEUTRON_NETWORK = {\n 'enable_router': True,\n 'enable_quotas': True,\n 'enable_ipv6': True,\n 'enable_distributed_router': False,\n 'enable_ha_router': False,\n 'enable_lb': True,\n 'enable_firewall': True,\n 'enable_vpn': True,\n 'enable_fip_topology_check': True,\n 'supported_vnic_types': ['*'],\n}\n\nOPENSTACK_HEAT_STACK = {\n 'enable_user_pass': True,\n}\n\n\nIMAGE_CUSTOM_PROPERTY_TITLES = {\n \"architecture\": _(\"Architecture\"),\n \"kernel_id\": _(\"Kernel ID\"),\n \"ramdisk_id\": _(\"Ramdisk ID\"),\n \"image_state\": _(\"Euca2ools state\"),\n \"project_id\": 
_(\"Project ID\"),\n \"image_type\": _(\"Image Type\"),\n}\n\nIMAGE_RESERVED_CUSTOM_PROPERTIES = []\nHORIZON_IMAGES_UPLOAD_MODE = 'direct'\nOPENSTACK_ENDPOINT_TYPE = \"internalURL\"\nAPI_RESULT_LIMIT = 1000\nAPI_RESULT_PAGE_SIZE = 20\nSWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024\nDROPDOWN_MAX_ITEMS = 30\nTIME_ZONE = \"UTC\"\nPOLICY_FILES_PATH = '/etc/openstack-dashboard'\n\n{% if horizon_custom_themes | length > 0 %}\nAVAILABLE_THEMES = [\n ('default', 'Default', 'themes/default'),\n ('material', 'Material', 'themes/material'),\n{% for theme in horizon_custom_themes %}\n ('{{ theme.name|e }}', '{{ theme.label|e }}', '/etc/openstack-dashboard/themes/{{ theme.name|e }}'),\n{% endfor %}\n]\n{% endif %}\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'formatters': {\n 'operation': {\n # The format of \"%(message)s\" is defined by\n # OPERATION_LOG_OPTIONS['format']\n 'format': '%(asctime)s %(message)s'\n },\n },\n 'handlers': {\n 'null': {\n 'level': 'DEBUG',\n 'class': 'logging.NullHandler',\n },\n 'console': {\n # Set the level to \"DEBUG\" for verbose output logging.\n 'level': 'INFO',\n 'class': 'logging.StreamHandler',\n },\n 'operation': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler',\n 'formatter': 'operation',\n },\n },\n 'loggers': {\n # Logging from django.db.backends is VERY verbose, send to null\n # by default.\n 'django.db.backends': {\n 'handlers': ['null'],\n 'propagate': False,\n },\n 'requests': {\n 'handlers': ['null'],\n 'propagate': False,\n },\n 'horizon': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'horizon.operation_log': {\n 'handlers': ['operation'],\n 'level': 'INFO',\n 'propagate': False,\n },\n 'openstack_dashboard': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'novaclient': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'cinderclient': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 
'keystoneclient': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'glanceclient': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'neutronclient': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'heatclient': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'ceilometerclient': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'swiftclient': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'openstack_auth': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'nose.plugins.manager': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'django': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': False,\n },\n 'iso8601': {\n 'handlers': ['null'],\n 'propagate': False,\n },\n 'scss': {\n 'handlers': ['null'],\n 'propagate': False,\n },\n },\n}\n\nSECURITY_GROUP_RULES = {\n 'all_tcp': {\n 'name': _('All TCP'),\n 'ip_protocol': 'tcp',\n 'from_port': '1',\n 'to_port': '65535',\n },\n 'all_udp': {\n 'name': _('All UDP'),\n 'ip_protocol': 'udp',\n 'from_port': '1',\n 'to_port': '65535',\n },\n 'all_icmp': {\n 'name': _('All ICMP'),\n 'ip_protocol': 'icmp',\n 'from_port': '-1',\n 'to_port': '-1',\n },\n 'ssh': {\n 'name': 'SSH',\n 'ip_protocol': 'tcp',\n 'from_port': '22',\n 'to_port': '22',\n },\n 'smtp': {\n 'name': 'SMTP',\n 'ip_protocol': 'tcp',\n 'from_port': '25',\n 'to_port': '25',\n },\n 'dns': {\n 'name': 'DNS',\n 'ip_protocol': 'tcp',\n 'from_port': '53',\n 'to_port': '53',\n },\n 'http': {\n 'name': 'HTTP',\n 'ip_protocol': 'tcp',\n 'from_port': '80',\n 'to_port': '80',\n },\n 'pop3': {\n 'name': 'POP3',\n 'ip_protocol': 'tcp',\n 'from_port': '110',\n 'to_port': '110',\n },\n 'imap': {\n 'name': 'IMAP',\n 'ip_protocol': 'tcp',\n 'from_port': '143',\n 'to_port': '143',\n },\n 'ldap': {\n 'name': 'LDAP',\n 'ip_protocol': 'tcp',\n 
'from_port': '389',\n 'to_port': '389',\n },\n 'https': {\n 'name': 'HTTPS',\n 'ip_protocol': 'tcp',\n 'from_port': '443',\n 'to_port': '443',\n },\n 'smtps': {\n 'name': 'SMTPS',\n 'ip_protocol': 'tcp',\n 'from_port': '465',\n 'to_port': '465',\n },\n 'imaps': {\n 'name': 'IMAPS',\n 'ip_protocol': 'tcp',\n 'from_port': '993',\n 'to_port': '993',\n },\n 'pop3s': {\n 'name': 'POP3S',\n 'ip_protocol': 'tcp',\n 'from_port': '995',\n 'to_port': '995',\n },\n 'ms_sql': {\n 'name': 'MS SQL',\n 'ip_protocol': 'tcp',\n 'from_port': '1433',\n 'to_port': '1433',\n },\n 'mysql': {\n 'name': 'MYSQL',\n 'ip_protocol': 'tcp',\n 'from_port': '3306',\n 'to_port': '3306',\n },\n 'rdp': {\n 'name': 'RDP',\n 'ip_protocol': 'tcp',\n 'from_port': '3389',\n 'to_port': '3389',\n },\n}\n\nREST_API_REQUIRED_SETTINGS = [\n 'CREATE_IMAGE_DEFAULTS',\n 'DEFAULT_BOOT_SOURCE',\n 'ENFORCE_PASSWORD_CHECK',\n 'LAUNCH_INSTANCE_DEFAULTS',\n 'OPENSTACK_HYPERVISOR_FEATURES',\n 'OPENSTACK_IMAGE_FORMATS',\n 'OPENSTACK_KEYSTONE_BACKEND',\n 'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN',\n]\n\n. Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'."} 2025-11-23 00:53:27.778398 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: . Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'. 2025-11-23 00:53:27.778461 | orchestrator | failed: [testbed-node-2] (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) => {"ansible_loop_var": "item", "changed": false, "item": "/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2", "msg": "AnsibleError: template error while templating string: Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'.. 
String: [template string for /ansible/roles/horizon/templates/_9998-kolla-settings.py.j2 identical to the dump above; duplicate elided]. Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'."} 2025-11-23 00:53:27.778492 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: . Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'. 2025-11-23 00:53:27.778555 | orchestrator | failed: [testbed-node-1] (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) => {"ansible_loop_var": "item", "changed": false, "item": "/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2", "msg": "AnsibleError: template error while templating string: Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'.. 
String: [template string for /ansible/roles/horizon/templates/_9998-kolla-settings.py.j2 identical to the dump above; duplicate elided]. Unexpected end of template. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. 
The innermost block that needs to be closed is 'if'."} 2025-11-23 00:53:27.778583 | orchestrator | 2025-11-23 00:53:27.778594 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:53:27.778605 | orchestrator | testbed-node-0 : ok=27  changed=3  unreachable=0 failed=1  skipped=20  rescued=0 ignored=0 2025-11-23 00:53:27.778616 | orchestrator | testbed-node-1 : ok=27  changed=3  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2025-11-23 00:53:27.778627 | orchestrator | testbed-node-2 : ok=27  changed=3  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2025-11-23 00:53:27.778648 | orchestrator | 2025-11-23 00:53:27.778659 | orchestrator | 2025-11-23 00:53:27.778670 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:53:27.778680 | orchestrator | Sunday 23 November 2025 00:53:25 +0000 (0:00:00.725) 0:00:14.949 ******* 2025-11-23 00:53:27.778689 | orchestrator | =============================================================================== 2025-11-23 00:53:27.778698 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.55s 2025-11-23 00:53:27.778708 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.55s 2025-11-23 00:53:27.778717 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.25s 2025-11-23 00:53:27.778727 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 0.73s 2025-11-23 00:53:27.778736 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2025-11-23 00:53:27.778745 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.46s 2025-11-23 00:53:27.778755 | orchestrator | horizon : Update policy file name --------------------------------------- 0.42s 2025-11-23 00:53:27.778764 | orchestrator | 
horizon : Update policy file name --------------------------------------- 0.42s 2025-11-23 00:53:27.778774 | orchestrator | horizon : Update policy file name --------------------------------------- 0.41s 2025-11-23 00:53:27.778783 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.39s 2025-11-23 00:53:27.778792 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.38s 2025-11-23 00:53:27.778801 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s 2025-11-23 00:53:27.778811 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.37s 2025-11-23 00:53:27.778820 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.32s 2025-11-23 00:53:27.778830 | orchestrator | horizon : Update policy file name --------------------------------------- 0.31s 2025-11-23 00:53:27.778839 | orchestrator | horizon : Update policy file name --------------------------------------- 0.30s 2025-11-23 00:53:27.778848 | orchestrator | horizon : Update policy file name --------------------------------------- 0.29s 2025-11-23 00:53:27.778863 | orchestrator | horizon : Update policy file name --------------------------------------- 0.28s 2025-11-23 00:53:27.778873 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.28s 2025-11-23 00:53:27.778882 | orchestrator | horizon : Update policy file name --------------------------------------- 0.28s 2025-11-23 00:53:30.810072 | orchestrator | 2025-11-23 00:53:30 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED 2025-11-23 00:53:30.810655 | orchestrator | 2025-11-23 00:53:30 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED 2025-11-23 00:53:30.810687 | orchestrator | 2025-11-23 00:53:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:53:33.844760 | orchestrator | 
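The failure recapped above is a plain Jinja2 syntax error: the templated string opens an `{% if %}`/`{% elif %}` chain that is never closed with `{% endif %}`. It can be reproduced and verified outside Ansible; a minimal sketch, assuming only that the `jinja2` package is installed (the `BROKEN`/`FIXED` strings and the `check()` helper are illustrative, not taken from the real template file):

```python
# Minimal reproduction of the "Unexpected end of template" failure above:
# an {% if %}/{% elif %} chain without a matching {% endif %}.
import jinja2

BROKEN = """
{% if horizon_backend_database | bool %}
SESSION_ENGINE = 'django.contrib.sessions.backends.db'
{% elif groups['memcached'] | length > 0 %}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
"""  # note: no {% endif %}

FIXED = BROKEN + "{% endif %}\n"

env = jinja2.Environment()

def check(source: str) -> str:
    """Parse (not render) the template; return 'ok' or the syntax error message."""
    try:
        env.parse(source)
        return "ok"
    except jinja2.TemplateSyntaxError as exc:
        return exc.message

print(check(BROKEN))  # complains that the innermost 'if' block is not closed
print(check(FIXED))
```

Parsing is enough to catch this class of error: no variables (`horizon_backend_database`, `groups`) need to be defined, so the same check could run in CI before the role is deployed.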
2025-11-23 00:53:33 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state STARTED [identical "is in state STARTED" / "Wait 1 second(s) until the next check" polling entries for tasks eb20e15b-9618-4b7d-ae81-c0f6aabf7032 and 7c5c5062-b130-4dda-b765-258a014bda17, repeated every ~3 s through 2025-11-23 00:54:13, elided] 2025-11-23 00:54:16.462070 | orchestrator | 2025-11-23 00:54:16.462155 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-23 00:54:16.462167 | orchestrator | 2.16.14 2025-11-23 00:54:16.462175 | orchestrator | 2025-11-23 00:54:16.462182 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-11-23 00:54:16.462189 | orchestrator | 2025-11-23 00:54:16.462196 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-11-23 00:54:16.462203 | orchestrator | Sunday 23 November 2025 00:52:09 +0000 (0:00:00.546) 0:00:00.546 
******* 2025-11-23 00:54:16.462224 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:54:16.462232 | orchestrator | 2025-11-23 00:54:16.462239 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-11-23 00:54:16.462245 | orchestrator | Sunday 23 November 2025 00:52:09 +0000 (0:00:00.562) 0:00:01.109 ******* 2025-11-23 00:54:16.462252 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.462259 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:54:16.462275 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.462282 | orchestrator | 2025-11-23 00:54:16.462289 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-11-23 00:54:16.462296 | orchestrator | Sunday 23 November 2025 00:52:10 +0000 (0:00:00.567) 0:00:01.677 ******* 2025-11-23 00:54:16.462302 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.462309 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:54:16.462412 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.462423 | orchestrator | 2025-11-23 00:54:16.462470 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-11-23 00:54:16.462479 | orchestrator | Sunday 23 November 2025 00:52:10 +0000 (0:00:00.282) 0:00:01.959 ******* 2025-11-23 00:54:16.462486 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.462564 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:54:16.462851 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.462861 | orchestrator | 2025-11-23 00:54:16.462868 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-11-23 00:54:16.462875 | orchestrator | Sunday 23 November 2025 00:52:11 +0000 (0:00:00.700) 0:00:02.660 ******* 2025-11-23 00:54:16.462883 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.462890 | orchestrator | ok: 
[testbed-node-4] 2025-11-23 00:54:16.462898 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.462905 | orchestrator | 2025-11-23 00:54:16.462913 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-11-23 00:54:16.462920 | orchestrator | Sunday 23 November 2025 00:52:11 +0000 (0:00:00.266) 0:00:02.927 ******* 2025-11-23 00:54:16.462928 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.462935 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:54:16.462943 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.462950 | orchestrator | 2025-11-23 00:54:16.462958 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-11-23 00:54:16.462965 | orchestrator | Sunday 23 November 2025 00:52:11 +0000 (0:00:00.298) 0:00:03.225 ******* 2025-11-23 00:54:16.462972 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.462978 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:54:16.462985 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.462992 | orchestrator | 2025-11-23 00:54:16.463044 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-11-23 00:54:16.463052 | orchestrator | Sunday 23 November 2025 00:52:12 +0000 (0:00:00.302) 0:00:03.528 ******* 2025-11-23 00:54:16.463059 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463066 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.463073 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.463080 | orchestrator | 2025-11-23 00:54:16.463086 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-11-23 00:54:16.463093 | orchestrator | Sunday 23 November 2025 00:52:12 +0000 (0:00:00.435) 0:00:03.963 ******* 2025-11-23 00:54:16.463099 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.463106 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:54:16.463257 | 
orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.463268 | orchestrator | 2025-11-23 00:54:16.463275 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-11-23 00:54:16.463281 | orchestrator | Sunday 23 November 2025 00:52:12 +0000 (0:00:00.275) 0:00:04.239 ******* 2025-11-23 00:54:16.463288 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-23 00:54:16.463295 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-23 00:54:16.463301 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-23 00:54:16.463308 | orchestrator | 2025-11-23 00:54:16.463315 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-11-23 00:54:16.463340 | orchestrator | Sunday 23 November 2025 00:52:13 +0000 (0:00:00.608) 0:00:04.848 ******* 2025-11-23 00:54:16.463347 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.463354 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:54:16.463361 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.463367 | orchestrator | 2025-11-23 00:54:16.463374 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-11-23 00:54:16.463381 | orchestrator | Sunday 23 November 2025 00:52:13 +0000 (0:00:00.419) 0:00:05.267 ******* 2025-11-23 00:54:16.463387 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-23 00:54:16.463394 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-23 00:54:16.463402 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-23 00:54:16.463408 | orchestrator | 2025-11-23 00:54:16.463415 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 
2025-11-23 00:54:16.463422 | orchestrator | Sunday 23 November 2025 00:52:15 +0000 (0:00:01.994) 0:00:07.262 ******* 2025-11-23 00:54:16.463428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-23 00:54:16.463435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-23 00:54:16.463442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-23 00:54:16.463449 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463456 | orchestrator | 2025-11-23 00:54:16.463494 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-11-23 00:54:16.463502 | orchestrator | Sunday 23 November 2025 00:52:16 +0000 (0:00:00.571) 0:00:07.833 ******* 2025-11-23 00:54:16.463511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.463527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.463534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.463548 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463555 | orchestrator | 2025-11-23 00:54:16.463562 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-11-23 00:54:16.463568 | orchestrator | Sunday 23 November 2025 00:52:17 +0000 (0:00:00.686) 0:00:08.520 ******* 2025-11-23 00:54:16.463577 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.463586 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.463593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.463600 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463607 | orchestrator | 2025-11-23 00:54:16.463614 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-11-23 00:54:16.463620 | orchestrator | Sunday 23 November 2025 00:52:17 +0000 (0:00:00.283) 0:00:08.803 ******* 2025-11-23 00:54:16.463629 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '15cb635051f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-23 00:52:14.632367', 'end': '2025-11-23 00:52:14.672830', 'delta': '0:00:00.040463', 
'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['15cb635051f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-11-23 00:54:16.463639 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e1ccf88a945d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-23 00:52:15.268824', 'end': '2025-11-23 00:52:15.320165', 'delta': '0:00:00.051341', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e1ccf88a945d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-11-23 00:54:16.463670 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c6bc56486b0b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-23 00:52:15.783044', 'end': '2025-11-23 00:52:15.832634', 'delta': '0:00:00.049590', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6bc56486b0b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 
'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-11-23 00:54:16.463685 | orchestrator | 2025-11-23 00:54:16.463692 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-11-23 00:54:16.463699 | orchestrator | Sunday 23 November 2025 00:52:17 +0000 (0:00:00.178) 0:00:08.981 ******* 2025-11-23 00:54:16.463706 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.463712 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:54:16.463719 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:54:16.463726 | orchestrator | 2025-11-23 00:54:16.463732 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-11-23 00:54:16.463739 | orchestrator | Sunday 23 November 2025 00:52:18 +0000 (0:00:00.426) 0:00:09.408 ******* 2025-11-23 00:54:16.463745 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-11-23 00:54:16.463752 | orchestrator | 2025-11-23 00:54:16.463758 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-11-23 00:54:16.463765 | orchestrator | Sunday 23 November 2025 00:52:19 +0000 (0:00:01.617) 0:00:11.025 ******* 2025-11-23 00:54:16.463772 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463778 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.463785 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.463791 | orchestrator | 2025-11-23 00:54:16.463798 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-11-23 00:54:16.463804 | orchestrator | Sunday 23 November 2025 00:52:19 +0000 (0:00:00.257) 0:00:11.282 ******* 2025-11-23 00:54:16.463811 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463818 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.463824 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.463831 | orchestrator | 2025-11-23 00:54:16.463837 | 
orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-23 00:54:16.463844 | orchestrator | Sunday 23 November 2025 00:52:20 +0000 (0:00:00.406) 0:00:11.689 ******* 2025-11-23 00:54:16.463851 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463857 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.463864 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.463870 | orchestrator | 2025-11-23 00:54:16.463877 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-11-23 00:54:16.463884 | orchestrator | Sunday 23 November 2025 00:52:20 +0000 (0:00:00.376) 0:00:12.065 ******* 2025-11-23 00:54:16.463890 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:54:16.463897 | orchestrator | 2025-11-23 00:54:16.463903 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-11-23 00:54:16.463911 | orchestrator | Sunday 23 November 2025 00:52:20 +0000 (0:00:00.120) 0:00:12.186 ******* 2025-11-23 00:54:16.463919 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463926 | orchestrator | 2025-11-23 00:54:16.463934 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-23 00:54:16.463942 | orchestrator | Sunday 23 November 2025 00:52:21 +0000 (0:00:00.211) 0:00:12.398 ******* 2025-11-23 00:54:16.463950 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.463957 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.463965 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.463972 | orchestrator | 2025-11-23 00:54:16.463979 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-11-23 00:54:16.463987 | orchestrator | Sunday 23 November 2025 00:52:21 +0000 (0:00:00.286) 0:00:12.684 ******* 2025-11-23 00:54:16.463995 | orchestrator | skipping: [testbed-node-3] 
2025-11-23 00:54:16.464002 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.464010 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.464018 | orchestrator | 2025-11-23 00:54:16.464025 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-11-23 00:54:16.464038 | orchestrator | Sunday 23 November 2025 00:52:21 +0000 (0:00:00.304) 0:00:12.989 ******* 2025-11-23 00:54:16.464046 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.464054 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.464062 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.464069 | orchestrator | 2025-11-23 00:54:16.464077 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-23 00:54:16.464085 | orchestrator | Sunday 23 November 2025 00:52:22 +0000 (0:00:00.459) 0:00:13.449 ******* 2025-11-23 00:54:16.464092 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.464099 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.464107 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.464115 | orchestrator | 2025-11-23 00:54:16.464122 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-23 00:54:16.464130 | orchestrator | Sunday 23 November 2025 00:52:22 +0000 (0:00:00.312) 0:00:13.762 ******* 2025-11-23 00:54:16.464137 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.464145 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.464152 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.464160 | orchestrator | 2025-11-23 00:54:16.464167 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-23 00:54:16.464175 | orchestrator | Sunday 23 November 2025 00:52:22 +0000 (0:00:00.294) 0:00:14.056 ******* 2025-11-23 00:54:16.464183 | orchestrator | skipping: [testbed-node-3] 
2025-11-23 00:54:16.464190 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.464198 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.464223 | orchestrator | 2025-11-23 00:54:16.464231 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-11-23 00:54:16.464238 | orchestrator | Sunday 23 November 2025 00:52:23 +0000 (0:00:00.314) 0:00:14.371 ******* 2025-11-23 00:54:16.464244 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.464251 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.464257 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.464264 | orchestrator | 2025-11-23 00:54:16.464271 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-23 00:54:16.464280 | orchestrator | Sunday 23 November 2025 00:52:23 +0000 (0:00:00.445) 0:00:14.817 ******* 2025-11-23 00:54:16.464288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b63f9958--8ac2--53b3--b8b4--a449f25b1af6-osd--block--b63f9958--8ac2--53b3--b8b4--a449f25b1af6', 'dm-uuid-LVM-ZhGuNkuCYqZ22eeL4QIwElfPuYQmz8FFk4w4fzk1FIBSmAJBMs9l4Qsgvq1IIDXi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--939e3465--cd43--5a63--a3e3--1280596736df-osd--block--939e3465--cd43--5a63--a3e3--1280596736df', 'dm-uuid-LVM-SWp7HzaJchIhI58WXkMnP8eIugd2c5So0aicnxK8wFcHH4NW03reknDLUhYbEvs4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': 
[], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 
'virtual': 1}})  2025-11-23 00:54:16.464436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b63f9958--8ac2--53b3--b8b4--a449f25b1af6-osd--block--b63f9958--8ac2--53b3--b8b4--a449f25b1af6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GBn3eA-uLy5-A6Ym-2hMg-2a6o-thuD-CoyvUV', 'scsi-0QEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0', 'scsi-SQEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--939e3465--cd43--5a63--a3e3--1280596736df-osd--block--939e3465--cd43--5a63--a3e3--1280596736df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WX2Qma-XamP-cx7n-eYdI-GpT2-3dkl-f9Ja5e', 'scsi-0QEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356', 'scsi-SQEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f', 'scsi-SQEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c884493c--7b6c--5149--8c24--d999b26a8d07-osd--block--c884493c--7b6c--5149--8c24--d999b26a8d07', 'dm-uuid-LVM-lUT9gI4lTJmblmstgY3lht2ya3ox2wczhMCrF6ZBLgU835h33UNldGtJ6SvNUZTd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1076031f--9245--50d5--902f--2c37ef490a74-osd--block--1076031f--9245--50d5--902f--2c37ef490a74', 
'dm-uuid-LVM-z2ZcrJagA2yYRVfFvkDYSOppstHO3tUqxpuYyNzjcKzfq5DuY7sDUqsVJCykIotj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464568 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:54:16.464576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c884493c--7b6c--5149--8c24--d999b26a8d07-osd--block--c884493c--7b6c--5149--8c24--d999b26a8d07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BRexkV-tahH-yA2u-ydrq-8lFY-h4Zu-7IwRw9', 'scsi-0QEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1', 'scsi-SQEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1076031f--9245--50d5--902f--2c37ef490a74-osd--block--1076031f--9245--50d5--902f--2c37ef490a74'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LKWh2Q-vidt-Iviq-pICe-u2at-FlnH-kwWZt0', 'scsi-0QEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4', 'scsi-SQEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f', 'scsi-SQEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e77b7216--a915--581b--8f3c--a7fc3e50862f-osd--block--e77b7216--a915--581b--8f3c--a7fc3e50862f', 'dm-uuid-LVM-mT6XHzw82IsYAS3eWV9p9TcL5Wbh6CszDyTXscQW5taWaSdCTO19EFRaWHgLZy7A'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--889c1fef--e00e--5a44--b704--8d22cfa7cd7a-osd--block--889c1fef--e00e--5a44--b704--8d22cfa7cd7a', 'dm-uuid-LVM-92FCts3GZ5oL8rtXoAyX1IOghxPDxEUkw2J2BM2aYzwtJmNsmvzyRmRQMfeR1BQg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 2025-11-23 00:54:16 | INFO  | Task eb20e15b-9618-4b7d-ae81-c0f6aabf7032 is in state SUCCESS 2025-11-23 00:54:16.464697 | orchestrator | 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464705 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:54:16.464716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-23 00:54:16.464789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part1', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part14', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part15', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part16', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464803 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e77b7216--a915--581b--8f3c--a7fc3e50862f-osd--block--e77b7216--a915--581b--8f3c--a7fc3e50862f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qcAps8-tIDI-tYH2-CroA-Vvkw-JSTi-Q4ra27', 'scsi-0QEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa', 'scsi-SQEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--889c1fef--e00e--5a44--b704--8d22cfa7cd7a-osd--block--889c1fef--e00e--5a44--b704--8d22cfa7cd7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-96Gig0-9IFG-aPZ1-2t0N-1h63-VfI2-acyoik', 'scsi-0QEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b', 'scsi-SQEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f', 'scsi-SQEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-23 00:54:16.464837 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:54:16.464844 | orchestrator | 2025-11-23 00:54:16.464854 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-11-23 00:54:16.464861 | orchestrator | Sunday 23 November 2025 00:52:24 +0000 (0:00:00.505) 0:00:15.323 ******* 2025-11-23 00:54:16.464869 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b63f9958--8ac2--53b3--b8b4--a449f25b1af6-osd--block--b63f9958--8ac2--53b3--b8b4--a449f25b1af6', 'dm-uuid-LVM-ZhGuNkuCYqZ22eeL4QIwElfPuYQmz8FFk4w4fzk1FIBSmAJBMs9l4Qsgvq1IIDXi'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464881 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--939e3465--cd43--5a63--a3e3--1280596736df-osd--block--939e3465--cd43--5a63--a3e3--1280596736df', 'dm-uuid-LVM-SWp7HzaJchIhI58WXkMnP8eIugd2c5So0aicnxK8wFcHH4NW03reknDLUhYbEvs4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464936 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c884493c--7b6c--5149--8c24--d999b26a8d07-osd--block--c884493c--7b6c--5149--8c24--d999b26a8d07', 'dm-uuid-LVM-lUT9gI4lTJmblmstgY3lht2ya3ox2wczhMCrF6ZBLgU835h33UNldGtJ6SvNUZTd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464973 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1076031f--9245--50d5--902f--2c37ef490a74-osd--block--1076031f--9245--50d5--902f--2c37ef490a74', 'dm-uuid-LVM-z2ZcrJagA2yYRVfFvkDYSOppstHO3tUqxpuYyNzjcKzfq5DuY7sDUqsVJCykIotj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-11-23 00:54:16.464986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_624b486d-3dba-4024-bac7-13317dda40b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.464994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.465006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b63f9958--8ac2--53b3--b8b4--a449f25b1af6-osd--block--b63f9958--8ac2--53b3--b8b4--a449f25b1af6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GBn3eA-uLy5-A6Ym-2hMg-2a6o-thuD-CoyvUV', 'scsi-0QEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0', 'scsi-SQEMU_QEMU_HARDDISK_d3bc663b-2fb7-4f3a-80f5-8fec376801b0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.465024 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--939e3465--cd43--5a63--a3e3--1280596736df-osd--block--939e3465--cd43--5a63--a3e3--1280596736df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WX2Qma-XamP-cx7n-eYdI-GpT2-3dkl-f9Ja5e', 'scsi-0QEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356', 'scsi-SQEMU_QEMU_HARDDISK_2b7e306c-9c4d-42db-9fc4-69fec959c356'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.465031 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.465038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f', 'scsi-SQEMU_QEMU_HARDDISK_6228c6cf-84a4-441a-8cc9-9597cabd600f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.465045 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-23 00:54:16.465052 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465078 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465085 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465092 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465115 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b068fe4-9aa6-4103-84ba-dc9167f04e78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c884493c--7b6c--5149--8c24--d999b26a8d07-osd--block--c884493c--7b6c--5149--8c24--d999b26a8d07'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BRexkV-tahH-yA2u-ydrq-8lFY-h4Zu-7IwRw9', 'scsi-0QEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1', 'scsi-SQEMU_QEMU_HARDDISK_9bb12db9-718e-4660-80a8-4889452babe1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465138 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1076031f--9245--50d5--902f--2c37ef490a74-osd--block--1076031f--9245--50d5--902f--2c37ef490a74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LKWh2Q-vidt-Iviq-pICe-u2at-FlnH-kwWZt0', 'scsi-0QEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4', 'scsi-SQEMU_QEMU_HARDDISK_8067a508-692c-4377-81f7-31a1d1b351f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465145 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f', 'scsi-SQEMU_QEMU_HARDDISK_8a2d036f-63dd-4edf-8f40-5cb15ccba33f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465159 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465171 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.465178 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.465185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e77b7216--a915--581b--8f3c--a7fc3e50862f-osd--block--e77b7216--a915--581b--8f3c--a7fc3e50862f', 'dm-uuid-LVM-mT6XHzw82IsYAS3eWV9p9TcL5Wbh6CszDyTXscQW5taWaSdCTO19EFRaWHgLZy7A'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--889c1fef--e00e--5a44--b704--8d22cfa7cd7a-osd--block--889c1fef--e00e--5a44--b704--8d22cfa7cd7a', 'dm-uuid-LVM-92FCts3GZ5oL8rtXoAyX1IOghxPDxEUkw2J2BM2aYzwtJmNsmvzyRmRQMfeR1BQg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465206 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465239 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465246 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465253 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465260 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465276 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part1', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part14', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part15', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part16', 'scsi-SQEMU_QEMU_HARDDISK_48181c8e-5a9a-4def-86fd-b6a2b5ab4b67-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465289 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e77b7216--a915--581b--8f3c--a7fc3e50862f-osd--block--e77b7216--a915--581b--8f3c--a7fc3e50862f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qcAps8-tIDI-tYH2-CroA-Vvkw-JSTi-Q4ra27', 'scsi-0QEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa', 'scsi-SQEMU_QEMU_HARDDISK_5ed148ed-cabe-49ec-beea-f05b5632a7aa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465296 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--889c1fef--e00e--5a44--b704--8d22cfa7cd7a-osd--block--889c1fef--e00e--5a44--b704--8d22cfa7cd7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-96Gig0-9IFG-aPZ1-2t0N-1h63-VfI2-acyoik', 'scsi-0QEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b', 'scsi-SQEMU_QEMU_HARDDISK_0964e8b1-b5e3-4f47-9890-2712ab1da39b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465303 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f', 'scsi-SQEMU_QEMU_HARDDISK_90348fbb-4b76-43ea-ac95-9b7258782d3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465374 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-23-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-23 00:54:16.465384 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:54:16.465391 | orchestrator |
2025-11-23 00:54:16.465397 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-11-23 00:54:16.465404 | orchestrator | Sunday 23 November 2025 00:52:24 +0000 (0:00:00.687) 0:00:16.010 *******
2025-11-23 00:54:16.465411 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:54:16.465418 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:54:16.465424 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:54:16.465431 | orchestrator |
2025-11-23 00:54:16.465437 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-11-23 00:54:16.465444 | orchestrator | Sunday 23 November 2025 00:52:25 +0000 (0:00:00.749) 0:00:16.760 *******
2025-11-23 00:54:16.465450 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:54:16.465457 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:54:16.465464 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:54:16.465470 | orchestrator |
2025-11-23 00:54:16.465477 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-11-23 00:54:16.465483 | orchestrator | Sunday 23 November 2025 00:52:25 +0000 (0:00:00.444) 0:00:17.204 *******
2025-11-23 00:54:16.465490 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:54:16.465496 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:54:16.465502 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:54:16.465509 | orchestrator |
2025-11-23 00:54:16.465515 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-11-23 00:54:16.465522 | orchestrator | Sunday 23 November 2025 00:52:26 +0000 (0:00:00.637) 0:00:17.841 *******
2025-11-23 00:54:16.465529 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.465535 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.465542 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:54:16.465548 | orchestrator |
2025-11-23 00:54:16.465555 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-11-23 00:54:16.465561 | orchestrator | Sunday 23 November 2025 00:52:26 +0000 (0:00:00.265) 0:00:18.107 *******
2025-11-23 00:54:16.465568 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.465574 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.465581 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:54:16.465587 | orchestrator |
2025-11-23 00:54:16.465594 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-11-23 00:54:16.465600 | orchestrator | Sunday 23 November 2025 00:52:27 +0000 (0:00:00.396) 0:00:18.503 *******
2025-11-23 00:54:16.465607 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.465613 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.465620 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:54:16.465626 | orchestrator |
2025-11-23 00:54:16.465632 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-11-23 00:54:16.465646 | orchestrator | Sunday 23 November 2025 00:52:27 +0000 (0:00:00.426) 0:00:18.930 *******
2025-11-23 00:54:16.465653 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-11-23 00:54:16.465659 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-11-23 00:54:16.465666 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-11-23 00:54:16.465672 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-11-23 00:54:16.465679 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-11-23 00:54:16.465685 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-11-23 00:54:16.465692 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-11-23 00:54:16.465698 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-11-23 00:54:16.465705 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-11-23 00:54:16.465711 | orchestrator |
2025-11-23 00:54:16.465718 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-11-23 00:54:16.465724 | orchestrator | Sunday 23 November 2025 00:52:28 +0000 (0:00:00.761) 0:00:19.692 *******
2025-11-23 00:54:16.465731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-23 00:54:16.465737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-23 00:54:16.465744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-23 00:54:16.465750 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.465757 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-11-23 00:54:16.465763 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-11-23 00:54:16.465770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-11-23 00:54:16.465776 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.465783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-11-23 00:54:16.465789 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-11-23 00:54:16.465796 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-11-23 00:54:16.465802 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:54:16.465808 | orchestrator |
2025-11-23 00:54:16.465815 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-11-23 00:54:16.465822 | orchestrator | Sunday 23 November 2025 00:52:28 +0000 (0:00:00.332) 0:00:20.024 *******
2025-11-23 00:54:16.465828 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 00:54:16.465835 | orchestrator |
2025-11-23 00:54:16.465846 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-11-23 00:54:16.465853 | orchestrator | Sunday 23 November 2025 00:52:29 +0000 (0:00:00.630) 0:00:20.655 *******
2025-11-23 00:54:16.465860 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.465867 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.465873 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:54:16.465880 | orchestrator |
2025-11-23 00:54:16.465886 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-11-23 00:54:16.465893 | orchestrator | Sunday 23 November 2025 00:52:29 +0000 (0:00:00.266) 0:00:20.921 *******
2025-11-23 00:54:16.465899 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.465911 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.465918 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:54:16.465924 | orchestrator |
2025-11-23 00:54:16.465931 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-11-23 00:54:16.465937 | orchestrator | Sunday 23 November 2025 00:52:29 +0000 (0:00:00.296) 0:00:21.218 *******
2025-11-23 00:54:16.465944 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.465950 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.465957 | orchestrator | skipping: [testbed-node-5]
2025-11-23 00:54:16.465963 | orchestrator |
2025-11-23 00:54:16.465970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-11-23 00:54:16.465982 | orchestrator | Sunday 23 November 2025 00:52:30 +0000 (0:00:00.272) 0:00:21.491 *******
2025-11-23 00:54:16.465988 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:54:16.465995 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:54:16.466001 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:54:16.466008 | orchestrator |
2025-11-23 00:54:16.466037 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-11-23 00:54:16.466046 | orchestrator | Sunday 23 November 2025 00:52:30 +0000 (0:00:00.679) 0:00:22.170 *******
2025-11-23 00:54:16.466052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-23 00:54:16.466059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-23 00:54:16.466065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-23 00:54:16.466072 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.466078 | orchestrator |
2025-11-23 00:54:16.466085 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-11-23 00:54:16.466092 | orchestrator | Sunday 23 November 2025 00:52:31 +0000 (0:00:00.363) 0:00:22.534 *******
2025-11-23 00:54:16.466098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-23 00:54:16.466104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-23 00:54:16.466111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-23 00:54:16.466118 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.466124 | orchestrator |
2025-11-23 00:54:16.466131 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-11-23 00:54:16.466137 | orchestrator | Sunday 23 November 2025 00:52:31 +0000 (0:00:00.337) 0:00:22.872 *******
2025-11-23 00:54:16.466144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-23 00:54:16.466151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-23 00:54:16.466157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-23 00:54:16.466164 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.466170 | orchestrator |
2025-11-23 00:54:16.466177 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-11-23 00:54:16.466183 | orchestrator | Sunday 23 November 2025 00:52:31 +0000 (0:00:00.340) 0:00:23.212 *******
2025-11-23 00:54:16.466190 | orchestrator | ok: [testbed-node-3]
2025-11-23 00:54:16.466196 | orchestrator | ok: [testbed-node-4]
2025-11-23 00:54:16.466203 | orchestrator | ok: [testbed-node-5]
2025-11-23 00:54:16.466210 | orchestrator |
2025-11-23 00:54:16.466216 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-11-23 00:54:16.466223 | orchestrator | Sunday 23 November 2025 00:52:32 +0000 (0:00:00.295) 0:00:23.508 *******
2025-11-23 00:54:16.466229 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-11-23 00:54:16.466236 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-11-23 00:54:16.466242 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-11-23 00:54:16.466249 | orchestrator |
2025-11-23 00:54:16.466256 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-11-23 00:54:16.466262 | orchestrator | Sunday 23 November 2025 00:52:32 +0000 (0:00:00.441) 0:00:23.949 *******
2025-11-23 00:54:16.466269 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-23 00:54:16.466275 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-23 00:54:16.466282 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-23 00:54:16.466288 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-11-23 00:54:16.466295 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-11-23 00:54:16.466301 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-11-23 00:54:16.466308 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-11-23 00:54:16.466319 | orchestrator |
2025-11-23 00:54:16.466339 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-11-23 00:54:16.466345 | orchestrator | Sunday 23 November 2025 00:52:33 +0000 (0:00:00.849) 0:00:24.799 *******
2025-11-23 00:54:16.466352 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-23 00:54:16.466359 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-23 00:54:16.466365 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-23 00:54:16.466372 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-11-23 00:54:16.466383 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-11-23 00:54:16.466390 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-11-23 00:54:16.466396 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-11-23 00:54:16.466403 | orchestrator |
2025-11-23 00:54:16.466409 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-11-23 00:54:16.466416 | orchestrator | Sunday 23 November 2025 00:52:35 +0000 (0:00:01.654) 0:00:26.453 *******
2025-11-23 00:54:16.466423 | orchestrator | skipping: [testbed-node-3]
2025-11-23 00:54:16.466433 | orchestrator | skipping: [testbed-node-4]
2025-11-23 00:54:16.466440 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-11-23 00:54:16.466446 | orchestrator |
2025-11-23 00:54:16.466453 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-11-23 00:54:16.466459 | orchestrator | Sunday 23 November 2025 00:52:35 +0000 (0:00:00.366) 0:00:26.819 *******
2025-11-23 00:54:16.466466 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-23 00:54:16.466475 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-23 00:54:16.466482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-23 00:54:16.466489 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-23 00:54:16.466496 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-23 00:54:16.466502 | orchestrator |
2025-11-23 00:54:16.466509 | orchestrator | TASK [generate keys] ***********************************************************
2025-11-23 00:54:16.466516 | orchestrator | Sunday 23 November 2025 00:53:21 +0000 (0:00:45.743) 0:01:12.563 *******
2025-11-23 00:54:16.466522 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466529 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466535 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466541 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466553 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466559 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466566 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-11-23 00:54:16.466572 | orchestrator |
2025-11-23 00:54:16.466579 | orchestrator | TASK [get keys from monitors] **************************************************
2025-11-23 00:54:16.466585 | orchestrator | Sunday 23 November 2025 00:53:46 +0000 (0:00:24.816) 0:01:37.379 *******
2025-11-23 00:54:16.466592 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466598 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466605 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466611 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466618 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466624 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466631 | orchestrator |
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-23 00:54:16.466638 | orchestrator | 2025-11-23 00:54:16.466644 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-11-23 00:54:16.466651 | orchestrator | Sunday 23 November 2025 00:53:57 +0000 (0:00:11.852) 0:01:49.232 ******* 2025-11-23 00:54:16.466657 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:54:16.466664 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-23 00:54:16.466670 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-23 00:54:16.466681 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:54:16.466688 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-23 00:54:16.466695 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-23 00:54:16.466701 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:54:16.466708 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-23 00:54:16.466717 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-23 00:54:16.466724 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:54:16.466731 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-23 00:54:16.466737 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-23 00:54:16.466744 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-23 00:54:16.466750 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-11-23 00:54:16.466757 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-23 00:54:16.466763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-23 00:54:16.466770 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-23 00:54:16.466776 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-23 00:54:16.466783 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-11-23 00:54:16.466789 | orchestrator |
2025-11-23 00:54:16.466796 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:54:16.466802 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-11-23 00:54:16.466810 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-11-23 00:54:16.466823 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-11-23 00:54:16.466829 | orchestrator |
2025-11-23 00:54:16.466836 | orchestrator |
2025-11-23 00:54:16.466842 | orchestrator |
2025-11-23 00:54:16.466849 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:54:16.466855 | orchestrator | Sunday 23 November 2025 00:54:15 +0000 (0:00:17.146) 0:02:06.378 *******
2025-11-23 00:54:16.466862 | orchestrator | ===============================================================================
2025-11-23 00:54:16.466869 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.74s
2025-11-23 00:54:16.466875 | orchestrator | generate keys ---------------------------------------------------------- 24.82s
2025-11-23 00:54:16.466882 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.15s
2025-11-23 00:54:16.466888 | orchestrator | get keys from monitors ------------------------------------------------- 11.85s
2025-11-23 00:54:16.466895 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.99s
2025-11-23 00:54:16.466901 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.65s
2025-11-23 00:54:16.466908 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.62s
2025-11-23 00:54:16.466914 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.85s
2025-11-23 00:54:16.466921 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.76s
2025-11-23 00:54:16.466927 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s
2025-11-23 00:54:16.466934 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.70s
2025-11-23 00:54:16.466940 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.69s
2025-11-23 00:54:16.466947 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.69s
2025-11-23 00:54:16.466953 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.68s
2025-11-23 00:54:16.466960 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s
2025-11-23 00:54:16.466966 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.63s
2025-11-23 00:54:16.466973 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s
2025-11-23 00:54:16.466979 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.57s
2025-11-23 00:54:16.466986 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.57s
2025-11-23 00:54:16.466992 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.56s
2025-11-23 00:54:16.466999 | orchestrator | 2025-11-23 00:54:16 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:16.467005 | orchestrator | 2025-11-23 00:54:16 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:19.505465 | orchestrator | 2025-11-23 00:54:19 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:19.506853 | orchestrator | 2025-11-23 00:54:19 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:19.506893 | orchestrator | 2025-11-23 00:54:19 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:22.544650 | orchestrator | 2025-11-23 00:54:22 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:22.546147 | orchestrator | 2025-11-23 00:54:22 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:22.546185 | orchestrator | 2025-11-23 00:54:22 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:25.589778 | orchestrator | 2025-11-23 00:54:25 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:25.591217 | orchestrator | 2025-11-23 00:54:25 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:25.591418 | orchestrator | 2025-11-23 00:54:25 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:28.628552 | orchestrator | 2025-11-23 00:54:28 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:28.631715 | orchestrator | 2025-11-23 00:54:28 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:28.631778 | orchestrator | 2025-11-23 00:54:28 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:31.677556 | orchestrator | 2025-11-23 00:54:31 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:31.678583 | orchestrator | 2025-11-23 00:54:31 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:31.678771 | orchestrator | 2025-11-23 00:54:31 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:34.725860 | orchestrator | 2025-11-23 00:54:34 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:34.727501 | orchestrator | 2025-11-23 00:54:34 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:34.727543 | orchestrator | 2025-11-23 00:54:34 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:37.770837 | orchestrator | 2025-11-23 00:54:37 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:37.773051 | orchestrator | 2025-11-23 00:54:37 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:37.773517 | orchestrator | 2025-11-23 00:54:37 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:40.823263 | orchestrator | 2025-11-23 00:54:40 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:40.824284 | orchestrator | 2025-11-23 00:54:40 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:40.824812 | orchestrator | 2025-11-23 00:54:40 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:43.859185 | orchestrator | 2025-11-23 00:54:43 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:43.860959 | orchestrator | 2025-11-23 00:54:43 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:43.861180 | orchestrator | 2025-11-23 00:54:43 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:46.897856 | orchestrator | 2025-11-23 00:54:46 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:46.898811 | orchestrator | 2025-11-23 00:54:46 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state STARTED
2025-11-23 00:54:46.898850 | orchestrator | 2025-11-23 00:54:46 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:49.941786 | orchestrator | 2025-11-23 00:54:49 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:49.942881 | orchestrator | 2025-11-23 00:54:49 | INFO  | Task 288f0bd4-640e-45d4-b544-e8629ec48599 is in state SUCCESS
2025-11-23 00:54:49.943120 | orchestrator | 2025-11-23 00:54:49 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:52.991449 | orchestrator | 2025-11-23 00:54:52 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:52.993380 | orchestrator | 2025-11-23 00:54:52 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:54:52.993531 | orchestrator | 2025-11-23 00:54:52 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:56.032201 | orchestrator | 2025-11-23 00:54:56 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:56.032872 | orchestrator | 2025-11-23 00:54:56 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:54:56.033043 | orchestrator | 2025-11-23 00:54:56 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:54:59.072401 | orchestrator | 2025-11-23 00:54:59 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:54:59.074286 | orchestrator | 2025-11-23 00:54:59 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:54:59.074349 | orchestrator | 2025-11-23 00:54:59 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:02.114845 | orchestrator | 2025-11-23 00:55:02 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:02.115992 | orchestrator | 2025-11-23 00:55:02 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:02.116026 | orchestrator | 2025-11-23 00:55:02 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:05.152465 | orchestrator | 2025-11-23 00:55:05 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:05.154850 | orchestrator | 2025-11-23 00:55:05 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:05.154908 | orchestrator | 2025-11-23 00:55:05 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:08.200139 | orchestrator | 2025-11-23 00:55:08 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:08.201087 | orchestrator | 2025-11-23 00:55:08 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:08.201116 | orchestrator | 2025-11-23 00:55:08 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:11.243856 | orchestrator | 2025-11-23 00:55:11 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:11.244777 | orchestrator | 2025-11-23 00:55:11 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:11.244865 | orchestrator | 2025-11-23 00:55:11 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:14.282624 | orchestrator | 2025-11-23 00:55:14 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:14.282745 | orchestrator | 2025-11-23 00:55:14 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:14.282763 | orchestrator | 2025-11-23 00:55:14 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:17.316240 | orchestrator | 2025-11-23 00:55:17 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:17.318510 | orchestrator | 2025-11-23 00:55:17 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:17.318588 | orchestrator | 2025-11-23 00:55:17 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:20.357675 | orchestrator | 2025-11-23 00:55:20 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:20.359916 | orchestrator | 2025-11-23 00:55:20 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:20.360368 | orchestrator | 2025-11-23 00:55:20 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:23.407099 | orchestrator | 2025-11-23 00:55:23 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:23.409286 | orchestrator | 2025-11-23 00:55:23 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:23.409425 | orchestrator | 2025-11-23 00:55:23 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:26.451807 | orchestrator | 2025-11-23 00:55:26 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:26.453239 | orchestrator | 2025-11-23 00:55:26 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:26.453272 | orchestrator | 2025-11-23 00:55:26 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:29.499227 | orchestrator | 2025-11-23 00:55:29 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:29.500495 | orchestrator | 2025-11-23 00:55:29 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:29.500530 | orchestrator | 2025-11-23 00:55:29 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:32.544830 | orchestrator | 2025-11-23 00:55:32 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:32.545932 | orchestrator | 2025-11-23 00:55:32 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:32.545985 | orchestrator | 2025-11-23 00:55:32 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:35.588924 | orchestrator | 2025-11-23 00:55:35 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:35.590747 | orchestrator | 2025-11-23 00:55:35 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:35.590801 | orchestrator | 2025-11-23 00:55:35 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:38.635580 | orchestrator | 2025-11-23 00:55:38 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:38.636113 | orchestrator | 2025-11-23 00:55:38 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:38.636156 | orchestrator | 2025-11-23 00:55:38 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:41.676815 | orchestrator | 2025-11-23 00:55:41 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:41.678413 | orchestrator | 2025-11-23 00:55:41 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:41.678460 | orchestrator | 2025-11-23 00:55:41 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:44.730538 | orchestrator | 2025-11-23 00:55:44 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:44.730645 | orchestrator | 2025-11-23 00:55:44 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state STARTED
2025-11-23 00:55:44.730661 | orchestrator | 2025-11-23 00:55:44 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:47.770561 | orchestrator | 2025-11-23 00:55:47 | INFO  | Task f80de0b0-4f3f-4d0a-b688-d6bd55a7cbc8 is in state STARTED
2025-11-23 00:55:47.773546 | orchestrator | 2025-11-23 00:55:47 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:55:47.774764 | orchestrator | 2025-11-23 00:55:47 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:55:47.776194 | orchestrator | 2025-11-23 00:55:47 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:47.779463 | orchestrator | 2025-11-23 00:55:47 | INFO  | Task 6c382dd0-1501-46c0-9ea3-4a421d046e10 is in state SUCCESS
2025-11-23 00:55:47.780658 | orchestrator |
2025-11-23 00:55:47.780689 | orchestrator |
2025-11-23 00:55:47.780701 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-11-23 00:55:47.780740 | orchestrator |
2025-11-23 00:55:47.780752 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2025-11-23 00:55:47.780764 | orchestrator | Sunday 23 November 2025 00:54:19 +0000 (0:00:00.142) 0:00:00.142 *******
2025-11-23 00:55:47.780775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-11-23 00:55:47.780789 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.780809 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.780828 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-11-23 00:55:47.780847 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.780866 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-11-23 00:55:47.780886 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-11-23 00:55:47.780901 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-11-23 00:55:47.780912 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-11-23 00:55:47.780923 | orchestrator |
2025-11-23 00:55:47.780934 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-11-23 00:55:47.780945 | orchestrator | Sunday 23 November 2025 00:54:23 +0000 (0:00:04.411) 0:00:04.553 *******
2025-11-23 00:55:47.780955 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-11-23 00:55:47.780966 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.780976 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.780987 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-11-23 00:55:47.780998 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.781009 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-11-23 00:55:47.781019 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-11-23 00:55:47.781030 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-11-23 00:55:47.781040 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-11-23 00:55:47.781051 | orchestrator |
2025-11-23 00:55:47.781062 | orchestrator | TASK [Create share directory] **************************************************
2025-11-23 00:55:47.781072 | orchestrator | Sunday 23 November 2025 00:54:27 +0000 (0:00:04.213) 0:00:08.766 *******
2025-11-23 00:55:47.781084 | orchestrator | changed: [testbed-manager -> localhost]
2025-11-23 00:55:47.781095 | orchestrator |
2025-11-23 00:55:47.781106 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-11-23 00:55:47.781132 | orchestrator | Sunday 23 November 2025 00:54:28 +0000 (0:00:00.930) 0:00:09.696 *******
2025-11-23 00:55:47.781143 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-11-23 00:55:47.781154 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.781165 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.781176 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-11-23 00:55:47.781186 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.781197 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-11-23 00:55:47.781217 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-11-23 00:55:47.781228 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-11-23 00:55:47.781238 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-11-23 00:55:47.781249 | orchestrator |
2025-11-23 00:55:47.781262 | orchestrator | TASK [Check if target directories exist] ***************************************
2025-11-23 00:55:47.781273 | orchestrator | Sunday 23 November 2025 00:54:40 +0000 (0:00:11.602) 0:00:21.299 *******
2025-11-23 00:55:47.781285 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2025-11-23 00:55:47.781347 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2025-11-23 00:55:47.781361 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2025-11-23 00:55:47.781374 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2025-11-23 00:55:47.781410 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2025-11-23 00:55:47.781432 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2025-11-23 00:55:47.781445 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2025-11-23 00:55:47.781457 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2025-11-23 00:55:47.781470 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2025-11-23 00:55:47.781483 | orchestrator |
2025-11-23 00:55:47.781494 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-11-23 00:55:47.781507 | orchestrator | Sunday 23 November 2025 00:54:42 +0000 (0:00:02.668) 0:00:23.968 *******
2025-11-23 00:55:47.781520 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-11-23 00:55:47.781533 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.781545 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.781556 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-11-23 00:55:47.781568 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-23 00:55:47.781580 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-11-23 00:55:47.781593 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-11-23 00:55:47.781605 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-11-23 00:55:47.781617 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-11-23 00:55:47.781628 | orchestrator |
2025-11-23 00:55:47.781638 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:55:47.781650 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:55:47.781662 | orchestrator |
2025-11-23 00:55:47.781673 | orchestrator |
2025-11-23 00:55:47.781684 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:55:47.781694 | orchestrator | Sunday 23 November 2025 00:54:49 +0000 (0:00:06.275) 0:00:30.243 *******
2025-11-23 00:55:47.781705 | orchestrator | ===============================================================================
2025-11-23 00:55:47.781716 | orchestrator | Write ceph keys to the share directory --------------------------------- 11.60s
2025-11-23 00:55:47.781726 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.28s
2025-11-23 00:55:47.781737 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.41s
2025-11-23 00:55:47.781770 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.21s
2025-11-23 00:55:47.781781 | orchestrator | Check if target directories exist --------------------------------------- 2.67s
2025-11-23 00:55:47.781792 | orchestrator | Create share directory -------------------------------------------------- 0.93s
2025-11-23 00:55:47.781802 | orchestrator |
2025-11-23 00:55:47.781813 | orchestrator |
2025-11-23 00:55:47.781824 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-11-23 00:55:47.781835 | orchestrator |
2025-11-23 00:55:47.781845 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-11-23 00:55:47.781856 | orchestrator | Sunday 23 November 2025 00:54:53 +0000 (0:00:00.209) 0:00:00.209 *******
2025-11-23 00:55:47.781874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-11-23 00:55:47.781886 | orchestrator |
2025-11-23 00:55:47.781897 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-11-23 00:55:47.781908 | orchestrator | Sunday 23 November 2025 00:54:53 +0000 (0:00:00.204) 0:00:00.413 *******
2025-11-23 00:55:47.781919 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-11-23 00:55:47.781929 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-11-23 00:55:47.781940 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-11-23 00:55:47.781951 | orchestrator |
2025-11-23 00:55:47.781961 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-11-23 00:55:47.781972 | orchestrator | Sunday 23 November 2025 00:54:54 +0000 (0:00:01.146) 0:00:01.559 *******
2025-11-23 00:55:47.781983 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-11-23 00:55:47.781994 | orchestrator |
2025-11-23 00:55:47.782004 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-11-23 00:55:47.782064 | orchestrator | Sunday 23 November 2025 00:54:55 +0000 (0:00:00.814) 0:00:02.865 *******
2025-11-23 00:55:47.782077 | orchestrator | changed: [testbed-manager]
2025-11-23 00:55:47.782088 | orchestrator |
2025-11-23 00:55:47.782098 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-11-23 00:55:47.782109 | orchestrator | Sunday 23 November 2025 00:54:56 +0000 (0:00:00.809) 0:00:03.680 *******
2025-11-23 00:55:47.782120 | orchestrator | changed: [testbed-manager]
2025-11-23 00:55:47.782130 | orchestrator |
2025-11-23 00:55:47.782141 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-11-23 00:55:47.782151 | orchestrator | Sunday 23 November 2025 00:54:57 +0000 (0:00:00.809) 0:00:04.489 *******
2025-11-23 00:55:47.782162 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-11-23 00:55:47.782173 | orchestrator | ok: [testbed-manager]
2025-11-23 00:55:47.782183 | orchestrator |
2025-11-23 00:55:47.782194 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-11-23 00:55:47.782212 | orchestrator | Sunday 23 November 2025 00:55:37 +0000 (0:00:39.918) 0:00:44.408 *******
2025-11-23 00:55:47.782224 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-11-23 00:55:47.782235 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-11-23 00:55:47.782246 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-11-23 00:55:47.782256 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-11-23 00:55:47.782267 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-11-23 00:55:47.782277 | orchestrator |
2025-11-23 00:55:47.782288 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-11-23 00:55:47.782326 | orchestrator | Sunday 23 November 2025 00:55:41 +0000 (0:00:03.783) 0:00:48.191 *******
2025-11-23 00:55:47.782338 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-11-23 00:55:47.782349 | orchestrator |
2025-11-23 00:55:47.782360 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-11-23 00:55:47.782379 | orchestrator | Sunday 23 November 2025 00:55:41 +0000 (0:00:00.423) 0:00:48.615 *******
2025-11-23 00:55:47.782390 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:55:47.782401 | orchestrator |
2025-11-23 00:55:47.782412 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-11-23 00:55:47.782423 | orchestrator | Sunday 23 November 2025 00:55:41 +0000 (0:00:00.119) 0:00:48.734 *******
2025-11-23 00:55:47.782434 | orchestrator | skipping: [testbed-manager]
2025-11-23 00:55:47.782444 | orchestrator |
2025-11-23 00:55:47.782455 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-11-23 00:55:47.782466 | orchestrator | Sunday 23 November 2025 00:55:42 +0000 (0:00:00.402) 0:00:49.136 *******
2025-11-23 00:55:47.782476 | orchestrator | changed: [testbed-manager]
2025-11-23 00:55:47.782487 | orchestrator |
2025-11-23 00:55:47.782498 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-11-23 00:55:47.782509 | orchestrator | Sunday 23 November 2025 00:55:43 +0000 (0:00:01.298) 0:00:50.435 *******
2025-11-23 00:55:47.782519 | orchestrator | changed: [testbed-manager]
2025-11-23 00:55:47.782530 | orchestrator |
2025-11-23 00:55:47.782541 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-11-23 00:55:47.782552 | orchestrator | Sunday 23 November 2025 00:55:44 +0000 (0:00:00.575) 0:00:51.145 *******
2025-11-23 00:55:47.782562 | orchestrator | changed: [testbed-manager]
2025-11-23 00:55:47.782573 | orchestrator |
2025-11-23 00:55:47.782584 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-11-23 00:55:47.782594 | orchestrator | Sunday 23 November 2025 00:55:44 +0000 (0:00:00.575) 0:00:51.721 *******
2025-11-23 00:55:47.782605 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-11-23 00:55:47.782616 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-11-23 00:55:47.782627 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-11-23 00:55:47.782638 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-11-23 00:55:47.782649 | orchestrator |
2025-11-23 00:55:47.782660 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:55:47.782671 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-23 00:55:47.782682 | orchestrator |
2025-11-23 00:55:47.782692 | orchestrator |
2025-11-23 00:55:47.782703 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:55:47.782714 | orchestrator | Sunday 23 November 2025 00:55:46 +0000 (0:00:01.340) 0:00:53.061 *******
2025-11-23 00:55:47.782724 | orchestrator | ===============================================================================
2025-11-23 00:55:47.782735 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.92s
2025-11-23 00:55:47.782751 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.78s
2025-11-23 00:55:47.782762 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.34s
2025-11-23 00:55:47.782773 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.31s
2025-11-23 00:55:47.782784 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.30s
2025-11-23 00:55:47.782795 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.15s
2025-11-23 00:55:47.782806 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s
2025-11-23 00:55:47.782817 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.81s
2025-11-23 00:55:47.782827 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s
2025-11-23 00:55:47.782838 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s
2025-11-23 00:55:47.782849 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s
2025-11-23 00:55:47.782859 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.40s
2025-11-23 00:55:47.782870 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s
2025-11-23 00:55:47.782889 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2025-11-23 00:55:47.782900 | orchestrator | 2025-11-23 00:55:47 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:50.838385 | orchestrator | 2025-11-23 00:55:50 | INFO  | Task f80de0b0-4f3f-4d0a-b688-d6bd55a7cbc8 is in state STARTED
2025-11-23 00:55:50.838496 | orchestrator | 2025-11-23 00:55:50 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:55:50.838511 | orchestrator | 2025-11-23 00:55:50 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:55:50.838523 | orchestrator | 2025-11-23 00:55:50 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:50.838535 | orchestrator | 2025-11-23 00:55:50 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:53.872285 | orchestrator | 2025-11-23 00:55:53 | INFO  | Task f80de0b0-4f3f-4d0a-b688-d6bd55a7cbc8 is in state SUCCESS
2025-11-23 00:55:53.872762 | orchestrator | 2025-11-23 00:55:53 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:55:53.873614 | orchestrator | 2025-11-23 00:55:53 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:55:53.874437 | orchestrator | 2025-11-23 00:55:53 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED
2025-11-23 00:55:53.875674 | orchestrator | 2025-11-23 00:55:53 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:53.876863 | orchestrator | 2025-11-23 00:55:53 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED
2025-11-23 00:55:53.877022 | orchestrator | 2025-11-23 00:55:53 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:56.922664 | orchestrator | 2025-11-23 00:55:56 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:55:56.923914 | orchestrator | 2025-11-23 00:55:56 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:55:56.924773 | orchestrator | 2025-11-23 00:55:56 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED
2025-11-23 00:55:56.926758 | orchestrator | 2025-11-23 00:55:56 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:56.928552 | orchestrator | 2025-11-23 00:55:56 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED
2025-11-23 00:55:56.928596 | orchestrator | 2025-11-23 00:55:56 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:55:59.954429 | orchestrator | 2025-11-23 00:55:59 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:55:59.954635 | orchestrator | 2025-11-23 00:55:59 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:55:59.955562 | orchestrator | 2025-11-23 00:55:59 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED
2025-11-23 00:55:59.956692 | orchestrator | 2025-11-23 00:55:59 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:55:59.960457 | orchestrator | 2025-11-23 00:55:59 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED
2025-11-23 00:55:59.960495 | orchestrator | 2025-11-23 00:55:59 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:56:02.993511 | orchestrator | 2025-11-23 00:56:02 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:56:02.995317 | orchestrator | 2025-11-23 00:56:02 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:56:02.996967 | orchestrator | 2025-11-23 00:56:02 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED
2025-11-23 00:56:02.998931 | orchestrator | 2025-11-23 00:56:02 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:56:03.000374 | orchestrator | 2025-11-23 00:56:02 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED
2025-11-23 00:56:03.000837 | orchestrator | 2025-11-23 00:56:02 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:56:06.038068 | orchestrator | 2025-11-23 00:56:06 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:56:06.039220 | orchestrator | 2025-11-23 00:56:06 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:56:06.041092 | orchestrator | 2025-11-23 00:56:06 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED
2025-11-23 00:56:06.042531 | orchestrator | 2025-11-23 00:56:06 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:56:06.043040 | orchestrator | 2025-11-23 00:56:06 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED
2025-11-23 00:56:06.043067 | orchestrator | 2025-11-23 00:56:06 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:56:09.070595 | orchestrator | 2025-11-23 00:56:09 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:56:09.070856 | orchestrator | 2025-11-23 00:56:09 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:56:09.071790 | orchestrator | 2025-11-23 00:56:09 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED
2025-11-23 00:56:09.072510 | orchestrator | 2025-11-23 00:56:09 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:56:09.074483 | orchestrator | 2025-11-23 00:56:09 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED
2025-11-23 00:56:09.074581 | orchestrator | 2025-11-23 00:56:09 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:56:12.113223 | orchestrator | 2025-11-23 00:56:12 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:56:12.114845 | orchestrator | 2025-11-23 00:56:12 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:56:12.115833 | orchestrator | 2025-11-23 00:56:12 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED
2025-11-23 00:56:12.116855 | orchestrator | 2025-11-23 00:56:12 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state STARTED
2025-11-23 00:56:12.117652 | orchestrator | 2025-11-23 00:56:12 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED
2025-11-23 00:56:12.117681 | orchestrator | 2025-11-23 00:56:12 | INFO  | Wait 1 second(s) until the next check
2025-11-23 00:56:15.149137 | orchestrator | 2025-11-23 00:56:15 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED
2025-11-23 00:56:15.149220 | orchestrator | 2025-11-23 00:56:15 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED
2025-11-23 00:56:15.149824 | orchestrator | 2025-11-23 00:56:15 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED
2025-11-23 00:56:15.150645 | orchestrator | 2025-11-23 00:56:15 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED
2025-11-23 00:56:15.152536 | orchestrator | 2025-11-23 00:56:15 | INFO  | Task 7c5c5062-b130-4dda-b765-258a014bda17 is in state SUCCESS
2025-11-23 00:56:15.154198 | orchestrator |
2025-11-23 00:56:15.154231 | orchestrator |
2025-11-23 00:56:15.154240 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 00:56:15.154270 | orchestrator |
2025-11-23 00:56:15.154278 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 00:56:15.154285 | orchestrator | Sunday 23 November 2025 00:55:50 +0000 (0:00:00.155) 0:00:00.155 *******
2025-11-23 00:56:15.154340 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:56:15.154351 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:56:15.154359 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:56:15.154366 | orchestrator |
2025-11-23 00:56:15.154373 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 00:56:15.154381 | orchestrator | Sunday 23 November 2025 00:55:50 +0000 (0:00:00.280) 0:00:00.436 *******
2025-11-23 00:56:15.154388 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-11-23 00:56:15.154396 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-11-23 00:56:15.154403 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-11-23 00:56:15.154410 | orchestrator |
2025-11-23 00:56:15.154418 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-11-23 00:56:15.154425 | orchestrator |
2025-11-23 00:56:15.154504 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-11-23 00:56:15.154520 | orchestrator | Sunday 23 November 2025 00:55:51 +0000 (0:00:00.709) 0:00:01.145 *******
2025-11-23 00:56:15.154531 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:56:15.154538 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:56:15.154545 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:56:15.154552 | orchestrator |
2025-11-23 00:56:15.154559 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 00:56:15.154567 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:56:15.154575 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:56:15.154582 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 00:56:15.154589 | orchestrator |
2025-11-23 00:56:15.154596 | orchestrator |
2025-11-23 00:56:15.154603 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 00:56:15.154610 | orchestrator | Sunday 23 November 2025 00:55:51 +0000 (0:00:00.715) 0:00:01.861 *******
2025-11-23 00:56:15.154617 | orchestrator | ===============================================================================
2025-11-23 00:56:15.154625 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s
2025-11-23 00:56:15.154632 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2025-11-23 00:56:15.154639 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2025-11-23 00:56:15.154646 | orchestrator |
2025-11-23 00:56:15.154653 | orchestrator |
2025-11-23 00:56:15.154660 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 00:56:15.154667 | orchestrator |
2025-11-23 00:56:15.154674 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 00:56:15.154681 | orchestrator | Sunday 23 November 2025 00:53:10 +0000 (0:00:00.239) 0:00:00.239 *******
2025-11-23 00:56:15.154688 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:56:15.154695 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:56:15.154702 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:56:15.154709 | orchestrator |
2025-11-23 00:56:15.154716 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 00:56:15.154723 | orchestrator | Sunday 23 November 2025 00:53:10 +0000 (0:00:00.246) 0:00:00.486 *******
2025-11-23 00:56:15.154730 | orchestrator | ok:
[testbed-node-0] => (item=enable_keystone_True) 2025-11-23 00:56:15.154738 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-11-23 00:56:15.154745 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-11-23 00:56:15.154752 | orchestrator | 2025-11-23 00:56:15.154759 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-11-23 00:56:15.154772 | orchestrator | 2025-11-23 00:56:15.154779 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-23 00:56:15.154786 | orchestrator | Sunday 23 November 2025 00:53:11 +0000 (0:00:00.339) 0:00:00.825 ******* 2025-11-23 00:56:15.154795 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:56:15.154803 | orchestrator | 2025-11-23 00:56:15.154811 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-11-23 00:56:15.154822 | orchestrator | Sunday 23 November 2025 00:53:11 +0000 (0:00:00.505) 0:00:01.331 ******* 2025-11-23 00:56:15.154859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-23 00:56:15.154887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-23 00:56:15.154902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-23 00:56:15.154915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-23 00:56:15.154936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-23 00:56:15.154948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-23 00:56:15.154969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-23 00:56:15.154988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-23 00:56:15.155002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155015 | orchestrator |
2025-11-23 00:56:15.155029 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-11-23 00:56:15.155038 | orchestrator | Sunday 23 November 2025 00:53:13 +0000 (0:00:01.865) 0:00:03.197 *******
2025-11-23 00:56:15.155047 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-11-23 00:56:15.155055 | orchestrator |
2025-11-23 00:56:15.155063 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-11-23 00:56:15.155077 | orchestrator | Sunday 23 November 2025 00:53:14 +0000 (0:00:00.754) 0:00:03.951 *******
2025-11-23 00:56:15.155088 | orchestrator | ok: [testbed-node-0]
2025-11-23 00:56:15.155103 | orchestrator | ok: [testbed-node-1]
2025-11-23 00:56:15.155120 | orchestrator | ok: [testbed-node-2]
2025-11-23 00:56:15.155131 | orchestrator |
2025-11-23 00:56:15.155143 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-11-23 00:56:15.155155 | orchestrator | Sunday 23 November 2025 00:53:14 +0000 (0:00:00.378) 0:00:04.329 *******
2025-11-23 00:56:15.155166 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-23 00:56:15.155178 | orchestrator |
2025-11-23 00:56:15.155190 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-11-23 00:56:15.155201 | orchestrator | Sunday 23 November 2025 00:53:15 +0000 (0:00:00.613) 0:00:04.943 *******
2025-11-23 00:56:15.155213 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 00:56:15.155224 | orchestrator |
2025-11-23 00:56:15.155235 | orchestrator | TASK
[service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-11-23 00:56:15.155247 | orchestrator | Sunday 23 November 2025 00:53:15 +0000 (0:00:00.488) 0:00:05.431 ******* 2025-11-23 00:56:15.155260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-23 00:56:15.155284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155417 | orchestrator |
2025-11-23 00:56:15.155424 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-11-23 00:56:15.155432 | orchestrator | Sunday 23 November 2025 00:53:19 +0000 (0:00:03.297) 0:00:08.729 *******
2025-11-23 00:56:15.155439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155462 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:56:15.155476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155509 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:56:15.155517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155543 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:56:15.155551 | orchestrator |
2025-11-23 00:56:15.155558 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-11-23 00:56:15.155565 | orchestrator | Sunday 23 November 2025 00:53:19 +0000 (0:00:00.622) 0:00:09.351 *******
2025-11-23 00:56:15.155576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155608 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:56:15.155615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155652 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:56:15.155663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155686 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:56:15.155698 | orchestrator |
2025-11-23 00:56:15.155714 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-11-23 00:56:15.155729 | orchestrator | Sunday 23 November 2025 00:53:20 +0000 (0:00:00.698) 0:00:10.049 *******
2025-11-23 00:56:15.155751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.155907 | orchestrator |
2025-11-23 00:56:15.155919 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-11-23 00:56:15.155927 | orchestrator | Sunday 23 November 2025 00:53:23 +0000 (0:00:02.967) 0:00:13.017 *******
2025-11-23 00:56:15.155935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.155965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.155991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.156005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.156018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.156030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.156041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.156065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-23 00:56:15.156077 | orchestrator |
2025-11-23 00:56:15.156089 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-11-23 00:56:15.156101 | orchestrator | Sunday 23 November 2025 00:53:27 +0000 (0:00:04.538) 0:00:17.556 *******
2025-11-23 00:56:15.156112 | orchestrator | changed: [testbed-node-0]
2025-11-23 00:56:15.156123 | orchestrator | changed: [testbed-node-1]
2025-11-23 00:56:15.156134 | orchestrator | changed: [testbed-node-2]
2025-11-23 00:56:15.156145 | orchestrator |
2025-11-23 00:56:15.156155 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-11-23 00:56:15.156167 | orchestrator | Sunday 23 November 2025 00:53:29 +0000 (0:00:01.322) 0:00:18.878 *******
2025-11-23 00:56:15.156183 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:56:15.156195 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:56:15.156208 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:56:15.156220 | orchestrator |
2025-11-23 00:56:15.156232 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-11-23 00:56:15.156244 | orchestrator | Sunday 23 November 2025 00:53:29 +0000 (0:00:00.467) 0:00:19.346 *******
2025-11-23 00:56:15.156256 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:56:15.156266 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:56:15.156277 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:56:15.156287 | orchestrator |
2025-11-23 00:56:15.156330 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-11-23 00:56:15.156343 | orchestrator | Sunday 23 November 2025 00:53:29 +0000 (0:00:00.274) 0:00:19.621 *******
2025-11-23 00:56:15.156355 | orchestrator | skipping: [testbed-node-0]
2025-11-23 00:56:15.156367 | orchestrator | skipping: [testbed-node-1]
2025-11-23 00:56:15.156380 | orchestrator | skipping: [testbed-node-2]
2025-11-23 00:56:15.156390 | orchestrator |
2025-11-23 00:56:15.156401 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-11-23 00:56:15.156411 | orchestrator | Sunday 23 November 2025 00:53:30 +0000 (0:00:00.400) 0:00:20.021 *******
2025-11-23 00:56:15.156423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.156438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.156471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.156485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-23 00:56:15.156504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-23 00:56:15.156518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2',
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-23 00:56:15.156531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-23 00:56:15.156552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-23 00:56:15.156565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-23 00:56:15.156573 | orchestrator | 2025-11-23 00:56:15.156580 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-23 00:56:15.156588 | orchestrator | Sunday 23 November 2025 00:53:32 +0000 (0:00:02.270) 0:00:22.291 ******* 2025-11-23 00:56:15.156595 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:56:15.156602 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:56:15.156609 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:56:15.156616 | orchestrator | 2025-11-23 00:56:15.156623 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-11-23 00:56:15.156630 | orchestrator | Sunday 23 November 2025 00:53:32 +0000 (0:00:00.281) 0:00:22.573 ******* 2025-11-23 00:56:15.156637 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-23 00:56:15.156646 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-23 00:56:15.156657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-23 00:56:15.156664 | orchestrator | 2025-11-23 00:56:15.156671 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-11-23 00:56:15.156678 | orchestrator | Sunday 23 November 2025 00:53:34 +0000 (0:00:01.442) 0:00:24.016 ******* 2025-11-23 00:56:15.156685 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 00:56:15.156692 | orchestrator | 2025-11-23 
00:56:15.156699 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-11-23 00:56:15.156707 | orchestrator | Sunday 23 November 2025 00:53:35 +0000 (0:00:00.803) 0:00:24.819 ******* 2025-11-23 00:56:15.156714 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:56:15.156721 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:56:15.156728 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:56:15.156735 | orchestrator | 2025-11-23 00:56:15.156742 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-11-23 00:56:15.156749 | orchestrator | Sunday 23 November 2025 00:53:35 +0000 (0:00:00.645) 0:00:25.465 ******* 2025-11-23 00:56:15.156756 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-23 00:56:15.156763 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 00:56:15.156770 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-23 00:56:15.156777 | orchestrator | 2025-11-23 00:56:15.156785 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-11-23 00:56:15.156792 | orchestrator | Sunday 23 November 2025 00:53:36 +0000 (0:00:00.930) 0:00:26.395 ******* 2025-11-23 00:56:15.156808 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:56:15.156816 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:56:15.156823 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:56:15.156830 | orchestrator | 2025-11-23 00:56:15.156837 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-11-23 00:56:15.156844 | orchestrator | Sunday 23 November 2025 00:53:37 +0000 (0:00:00.269) 0:00:26.665 ******* 2025-11-23 00:56:15.156851 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-23 00:56:15.156858 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-23 
00:56:15.156865 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-23 00:56:15.156872 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-23 00:56:15.156879 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-23 00:56:15.156886 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-23 00:56:15.156893 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-23 00:56:15.156900 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-23 00:56:15.156908 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-23 00:56:15.156915 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-23 00:56:15.156922 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-23 00:56:15.156929 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-23 00:56:15.156936 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-23 00:56:15.156943 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-23 00:56:15.156950 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-23 00:56:15.156957 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-23 00:56:15.156964 | orchestrator | changed: [testbed-node-0] 
=> (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-23 00:56:15.156972 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-23 00:56:15.156983 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-23 00:56:15.156991 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-23 00:56:15.156997 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-23 00:56:15.157005 | orchestrator | 2025-11-23 00:56:15.157012 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-11-23 00:56:15.157019 | orchestrator | Sunday 23 November 2025 00:53:45 +0000 (0:00:08.636) 0:00:35.301 ******* 2025-11-23 00:56:15.157026 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-23 00:56:15.157033 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-23 00:56:15.157041 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-23 00:56:15.157048 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-23 00:56:15.157055 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-23 00:56:15.157065 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-23 00:56:15.157077 | orchestrator | 2025-11-23 00:56:15.157085 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-11-23 00:56:15.157092 | orchestrator | Sunday 23 November 2025 00:53:48 +0000 (0:00:02.698) 0:00:37.999 ******* 2025-11-23 00:56:15.157099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-23 00:56:15.157107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
2025-11-23 00:56:15.157120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-23 00:56:15.157129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-23 00:56:15.157140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-23 00:56:15.157152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-23 00:56:15.157160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-23 00:56:15.157168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-23 00:56:15.157175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-23 00:56:15.157183 | orchestrator | 2025-11-23 00:56:15.157190 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-23 00:56:15.157197 | orchestrator | Sunday 23 November 2025 00:53:50 +0000 (0:00:02.254) 0:00:40.253 ******* 2025-11-23 00:56:15.157204 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:56:15.157211 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:56:15.157218 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:56:15.157226 | orchestrator | 2025-11-23 00:56:15.157237 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-11-23 00:56:15.157244 | orchestrator | Sunday 23 November 2025 00:53:50 +0000 (0:00:00.267) 0:00:40.521 ******* 2025-11-23 00:56:15.157252 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:56:15.157259 | orchestrator | 2025-11-23 00:56:15.157266 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-11-23 
00:56:15.157273 | orchestrator | Sunday 23 November 2025 00:53:53 +0000 (0:00:02.361) 0:00:42.883 ******* 2025-11-23 00:56:15.157284 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:56:15.157291 | orchestrator | 2025-11-23 00:56:15.157337 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-11-23 00:56:15.157345 | orchestrator | Sunday 23 November 2025 00:53:55 +0000 (0:00:02.314) 0:00:45.197 ******* 2025-11-23 00:56:15.157352 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:56:15.157359 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:56:15.157367 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:56:15.157374 | orchestrator | 2025-11-23 00:56:15.157381 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-11-23 00:56:15.157388 | orchestrator | Sunday 23 November 2025 00:53:56 +0000 (0:00:00.950) 0:00:46.148 ******* 2025-11-23 00:56:15.157395 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:56:15.157402 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:56:15.157409 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:56:15.157416 | orchestrator | 2025-11-23 00:56:15.157423 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-11-23 00:56:15.157435 | orchestrator | Sunday 23 November 2025 00:53:56 +0000 (0:00:00.263) 0:00:46.411 ******* 2025-11-23 00:56:15.157442 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:56:15.157449 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:56:15.157457 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:56:15.157464 | orchestrator | 2025-11-23 00:56:15.157471 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-11-23 00:56:15.157478 | orchestrator | Sunday 23 November 2025 00:53:57 +0000 (0:00:00.313) 0:00:46.725 ******* 2025-11-23 00:56:15.157485 | orchestrator | changed: [testbed-node-0] 
2025-11-23 00:56:15.157492 | orchestrator | 2025-11-23 00:56:15.157499 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-11-23 00:56:15.157506 | orchestrator | Sunday 23 November 2025 00:54:12 +0000 (0:00:14.970) 0:01:01.696 ******* 2025-11-23 00:56:15.157513 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:56:15.157520 | orchestrator | 2025-11-23 00:56:15.157527 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-11-23 00:56:15.157534 | orchestrator | Sunday 23 November 2025 00:54:22 +0000 (0:00:10.888) 0:01:12.584 ******* 2025-11-23 00:56:15.157541 | orchestrator | 2025-11-23 00:56:15.157548 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-11-23 00:56:15.157556 | orchestrator | Sunday 23 November 2025 00:54:22 +0000 (0:00:00.058) 0:01:12.642 ******* 2025-11-23 00:56:15.157563 | orchestrator | 2025-11-23 00:56:15.157570 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-11-23 00:56:15.157577 | orchestrator | Sunday 23 November 2025 00:54:23 +0000 (0:00:00.059) 0:01:12.702 ******* 2025-11-23 00:56:15.157584 | orchestrator | 2025-11-23 00:56:15.157591 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-11-23 00:56:15.157598 | orchestrator | Sunday 23 November 2025 00:54:23 +0000 (0:00:00.061) 0:01:12.764 ******* 2025-11-23 00:56:15.157605 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:56:15.157612 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:56:15.157619 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:56:15.157626 | orchestrator | 2025-11-23 00:56:15.157633 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-11-23 00:56:15.157640 | orchestrator | Sunday 23 November 2025 00:55:09 +0000 (0:00:46.267) 0:01:59.031 ******* 
2025-11-23 00:56:15.157647 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:56:15.157654 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:56:15.157661 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:56:15.157668 | orchestrator | 2025-11-23 00:56:15.157675 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-11-23 00:56:15.157682 | orchestrator | Sunday 23 November 2025 00:55:14 +0000 (0:00:04.676) 0:02:03.708 ******* 2025-11-23 00:56:15.157690 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:56:15.157697 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:56:15.157709 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:56:15.157716 | orchestrator | 2025-11-23 00:56:15.157723 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-23 00:56:15.157730 | orchestrator | Sunday 23 November 2025 00:55:21 +0000 (0:00:07.653) 0:02:11.361 ******* 2025-11-23 00:56:15.157738 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:56:15.157745 | orchestrator | 2025-11-23 00:56:15.157752 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-11-23 00:56:15.157759 | orchestrator | Sunday 23 November 2025 00:55:22 +0000 (0:00:00.571) 0:02:11.932 ******* 2025-11-23 00:56:15.157766 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:56:15.157773 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:56:15.157781 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:56:15.157788 | orchestrator | 2025-11-23 00:56:15.157795 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-11-23 00:56:15.157802 | orchestrator | Sunday 23 November 2025 00:55:23 +0000 (0:00:00.717) 0:02:12.650 ******* 2025-11-23 00:56:15.157809 | orchestrator | changed: [testbed-node-0] 2025-11-23 
00:56:15.157816 | orchestrator | 2025-11-23 00:56:15.157823 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-11-23 00:56:15.157830 | orchestrator | Sunday 23 November 2025 00:55:24 +0000 (0:00:01.730) 0:02:14.381 ******* 2025-11-23 00:56:15.157837 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-11-23 00:56:15.157844 | orchestrator | 2025-11-23 00:56:15.157851 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-11-23 00:56:15.157858 | orchestrator | Sunday 23 November 2025 00:55:36 +0000 (0:00:11.803) 0:02:26.184 ******* 2025-11-23 00:56:15.157865 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-11-23 00:56:15.157872 | orchestrator | 2025-11-23 00:56:15.157885 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-11-23 00:56:15.157893 | orchestrator | Sunday 23 November 2025 00:56:01 +0000 (0:00:25.456) 0:02:51.641 ******* 2025-11-23 00:56:15.157900 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-11-23 00:56:15.157907 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-11-23 00:56:15.157914 | orchestrator | 2025-11-23 00:56:15.157921 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-11-23 00:56:15.157928 | orchestrator | Sunday 23 November 2025 00:56:09 +0000 (0:00:07.048) 0:02:58.690 ******* 2025-11-23 00:56:15.157936 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:56:15.157943 | orchestrator | 2025-11-23 00:56:15.157950 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-11-23 00:56:15.157957 | orchestrator | Sunday 23 November 2025 00:56:09 +0000 (0:00:00.181) 0:02:58.871 ******* 2025-11-23 00:56:15.157964 | orchestrator | 
skipping: [testbed-node-0] 2025-11-23 00:56:15.157971 | orchestrator | 2025-11-23 00:56:15.157978 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-11-23 00:56:15.157985 | orchestrator | Sunday 23 November 2025 00:56:09 +0000 (0:00:00.115) 0:02:58.987 ******* 2025-11-23 00:56:15.157992 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:56:15.157999 | orchestrator | 2025-11-23 00:56:15.158010 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-11-23 00:56:15.158068 | orchestrator | Sunday 23 November 2025 00:56:09 +0000 (0:00:00.111) 0:02:59.098 ******* 2025-11-23 00:56:15.158076 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:56:15.158083 | orchestrator | 2025-11-23 00:56:15.158090 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-11-23 00:56:15.158097 | orchestrator | Sunday 23 November 2025 00:56:09 +0000 (0:00:00.455) 0:02:59.553 ******* 2025-11-23 00:56:15.158104 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:56:15.158111 | orchestrator | 2025-11-23 00:56:15.158118 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-23 00:56:15.158131 | orchestrator | Sunday 23 November 2025 00:56:13 +0000 (0:00:03.495) 0:03:03.048 ******* 2025-11-23 00:56:15.158138 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:56:15.158146 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:56:15.158153 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:56:15.158160 | orchestrator | 2025-11-23 00:56:15.158167 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:56:15.158175 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-11-23 00:56:15.158184 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 
skipped=10  rescued=0 ignored=0 2025-11-23 00:56:15.158191 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-11-23 00:56:15.158198 | orchestrator | 2025-11-23 00:56:15.158205 | orchestrator | 2025-11-23 00:56:15.158212 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:56:15.158219 | orchestrator | Sunday 23 November 2025 00:56:13 +0000 (0:00:00.377) 0:03:03.426 ******* 2025-11-23 00:56:15.158226 | orchestrator | =============================================================================== 2025-11-23 00:56:15.158233 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 46.27s 2025-11-23 00:56:15.158240 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.46s 2025-11-23 00:56:15.158247 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.97s 2025-11-23 00:56:15.158254 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.80s 2025-11-23 00:56:15.158261 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.89s 2025-11-23 00:56:15.158268 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.64s 2025-11-23 00:56:15.158275 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.65s 2025-11-23 00:56:15.158282 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.05s 2025-11-23 00:56:15.158288 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.68s 2025-11-23 00:56:15.158313 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.54s 2025-11-23 00:56:15.158326 | orchestrator | keystone : Creating default user role ----------------------------------- 3.50s 2025-11-23 
00:56:15.158338 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.30s 2025-11-23 00:56:15.158351 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.97s 2025-11-23 00:56:15.158363 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.70s 2025-11-23 00:56:15.158372 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.36s 2025-11-23 00:56:15.158380 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.31s 2025-11-23 00:56:15.158387 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.27s 2025-11-23 00:56:15.158394 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.25s 2025-11-23 00:56:15.158401 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.87s 2025-11-23 00:56:15.158408 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.73s 2025-11-23 00:56:15.158421 | orchestrator | 2025-11-23 00:56:15 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED 2025-11-23 00:56:15.158429 | orchestrator | 2025-11-23 00:56:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:56:18.192665 | orchestrator | 2025-11-23 00:56:18 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:56:18.192840 | orchestrator | 2025-11-23 00:56:18 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:56:18.193171 | orchestrator | 2025-11-23 00:56:18 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED 2025-11-23 00:56:18.194237 | orchestrator | 2025-11-23 00:56:18 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED 2025-11-23 00:56:18.194584 | orchestrator | 2025-11-23 00:56:18 | INFO  | Task 
df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:56:27.304773 | orchestrator | 2025-11-23 00:56:27 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED 2025-11-23 00:56:27.305446 | orchestrator | 2025-11-23 00:56:27 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state STARTED 2025-11-23 00:56:27.306711 | orchestrator | 2025-11-23 00:56:27 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED 2025-11-23 00:56:27.306731 | orchestrator | 2025-11-23 00:56:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:56:30.328498 | orchestrator | 2025-11-23 00:56:30 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:56:30.329109 | orchestrator | 2025-11-23 00:56:30 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:56:30.329936 | orchestrator | 2025-11-23 00:56:30 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED 2025-11-23 00:56:30.332119 | orchestrator | 2025-11-23 00:56:30 | INFO  | Task 8d85f1f8-9ce5-4cb0-be1a-6c0f9ceb56a4 is in state SUCCESS 2025-11-23 00:56:30.333008 | orchestrator | 2025-11-23 00:56:30 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED 2025-11-23 00:56:30.333048 | orchestrator | 2025-11-23 00:56:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:56:33.355488 | orchestrator | 2025-11-23 00:56:33 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:56:33.356138 | orchestrator | 2025-11-23 00:56:33 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 00:56:33.356712 | orchestrator | 2025-11-23 00:56:33 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:56:33.357734 | orchestrator | 2025-11-23 00:56:33 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state STARTED 2025-11-23 00:56:33.358007 | orchestrator | 2025-11-23 00:56:33 | INFO  | Task 
5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED 2025-11-23 00:57:18.818578 | orchestrator | 2025-11-23 00:57:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:57:21.842152 | orchestrator | 2025-11-23 00:57:21 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:57:21.844379 | orchestrator | 2025-11-23 00:57:21 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 00:57:21.847192 | orchestrator | 2025-11-23 00:57:21 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:57:21.848505 | orchestrator | 2025-11-23 00:57:21 | INFO  | Task 8df3355d-473e-41cb-8092-6b45ea053bc0 is in state SUCCESS 2025-11-23 00:57:21.849839 | orchestrator | 2025-11-23 00:57:21.849867 | orchestrator | 2025-11-23 00:57:21.849878 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 00:57:21.849889 | orchestrator | 2025-11-23 00:57:21.849899 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:57:21.849909 | orchestrator | Sunday 23 November 2025 00:55:56 +0000 (0:00:00.288) 0:00:00.288 ******* 2025-11-23 00:57:21.849919 | orchestrator | ok: [testbed-manager] 2025-11-23 00:57:21.849929 | orchestrator | ok: [testbed-node-3] 2025-11-23 00:57:21.849939 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:57:21.849948 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:57:21.849957 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:57:21.849966 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:57:21.850012 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:57:21.850068 | orchestrator | 2025-11-23 00:57:21.850078 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:57:21.850088 | orchestrator | Sunday 23 November 2025 00:55:57 +0000 (0:00:00.994) 0:00:01.283 ******* 2025-11-23 00:57:21.850098 | orchestrator | 
ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-11-23 00:57:21.850108 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-11-23 00:57:21.850117 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-11-23 00:57:21.850127 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-11-23 00:57:21.850136 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-11-23 00:57:21.850145 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-11-23 00:57:21.850154 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-11-23 00:57:21.850163 | orchestrator | 2025-11-23 00:57:21.850197 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-11-23 00:57:21.850208 | orchestrator | 2025-11-23 00:57:21.850219 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-11-23 00:57:21.850229 | orchestrator | Sunday 23 November 2025 00:55:58 +0000 (0:00:00.896) 0:00:02.180 ******* 2025-11-23 00:57:21.850240 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:57:21.850253 | orchestrator | 2025-11-23 00:57:21.850263 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-11-23 00:57:21.850273 | orchestrator | Sunday 23 November 2025 00:56:00 +0000 (0:00:01.885) 0:00:04.066 ******* 2025-11-23 00:57:21.850283 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-11-23 00:57:21.850292 | orchestrator | 2025-11-23 00:57:21.850335 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-11-23 00:57:21.850346 | orchestrator | Sunday 23 November 2025 00:56:04 +0000 (0:00:03.792) 0:00:07.858 ******* 2025-11-23 00:57:21.850356 | 
orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-11-23 00:57:21.850383 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-11-23 00:57:21.850394 | orchestrator | 2025-11-23 00:57:21.850403 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-11-23 00:57:21.850412 | orchestrator | Sunday 23 November 2025 00:56:10 +0000 (0:00:06.203) 0:00:14.061 ******* 2025-11-23 00:57:21.850422 | orchestrator | changed: [testbed-manager] => (item=service) 2025-11-23 00:57:21.850432 | orchestrator | 2025-11-23 00:57:21.850441 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-11-23 00:57:21.850451 | orchestrator | Sunday 23 November 2025 00:56:14 +0000 (0:00:03.290) 0:00:17.352 ******* 2025-11-23 00:57:21.850463 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-23 00:57:21.850474 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-11-23 00:57:21.850485 | orchestrator | 2025-11-23 00:57:21.850495 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-11-23 00:57:21.850506 | orchestrator | Sunday 23 November 2025 00:56:18 +0000 (0:00:04.031) 0:00:21.384 ******* 2025-11-23 00:57:21.850517 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-11-23 00:57:21.850528 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-11-23 00:57:21.850539 | orchestrator | 2025-11-23 00:57:21.850549 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-11-23 00:57:21.850560 | orchestrator | Sunday 23 November 2025 00:56:24 +0000 (0:00:06.534) 0:00:27.918 ******* 2025-11-23 00:57:21.850571 | orchestrator | changed: [testbed-manager] => 
(item=ceph_rgw -> service -> admin) 2025-11-23 00:57:21.850581 | orchestrator | 2025-11-23 00:57:21.850592 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:57:21.850604 | orchestrator | testbed-manager : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.850615 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.850626 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.850638 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.850648 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.850672 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.850693 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.850704 | orchestrator | 2025-11-23 00:57:21.850715 | orchestrator | 2025-11-23 00:57:21.850726 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:57:21.850736 | orchestrator | Sunday 23 November 2025 00:56:29 +0000 (0:00:04.921) 0:00:32.839 ******* 2025-11-23 00:57:21.850747 | orchestrator | =============================================================================== 2025-11-23 00:57:21.850758 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.53s 2025-11-23 00:57:21.850768 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.20s 2025-11-23 00:57:21.850779 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.92s 2025-11-23 00:57:21.850790 | orchestrator | 
service-ks-register : ceph-rgw | Creating users ------------------------- 4.03s 2025-11-23 00:57:21.850801 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.79s 2025-11-23 00:57:21.850811 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.29s 2025-11-23 00:57:21.850821 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.89s 2025-11-23 00:57:21.850830 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s 2025-11-23 00:57:21.850839 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s 2025-11-23 00:57:21.850849 | orchestrator | 2025-11-23 00:57:21.850858 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-23 00:57:21.850868 | orchestrator | 2.16.14 2025-11-23 00:57:21.850878 | orchestrator | 2025-11-23 00:57:21.850888 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2025-11-23 00:57:21.850897 | orchestrator | 2025-11-23 00:57:21.850907 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-11-23 00:57:21.850916 | orchestrator | Sunday 23 November 2025 00:55:50 +0000 (0:00:00.240) 0:00:00.240 ******* 2025-11-23 00:57:21.850926 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.850935 | orchestrator | 2025-11-23 00:57:21.850945 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-11-23 00:57:21.850954 | orchestrator | Sunday 23 November 2025 00:55:52 +0000 (0:00:01.717) 0:00:01.957 ******* 2025-11-23 00:57:21.850963 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.850972 | orchestrator | 2025-11-23 00:57:21.850982 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-11-23 00:57:21.850991 | 
orchestrator | Sunday 23 November 2025 00:55:52 +0000 (0:00:00.892) 0:00:02.849 ******* 2025-11-23 00:57:21.851000 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.851010 | orchestrator | 2025-11-23 00:57:21.851019 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-11-23 00:57:21.851028 | orchestrator | Sunday 23 November 2025 00:55:53 +0000 (0:00:00.875) 0:00:03.725 ******* 2025-11-23 00:57:21.851038 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.851047 | orchestrator | 2025-11-23 00:57:21.851061 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-11-23 00:57:21.851071 | orchestrator | Sunday 23 November 2025 00:55:55 +0000 (0:00:01.721) 0:00:05.446 ******* 2025-11-23 00:57:21.851080 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.851090 | orchestrator | 2025-11-23 00:57:21.851099 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-11-23 00:57:21.851109 | orchestrator | Sunday 23 November 2025 00:55:56 +0000 (0:00:00.952) 0:00:06.398 ******* 2025-11-23 00:57:21.851118 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.851127 | orchestrator | 2025-11-23 00:57:21.851137 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-11-23 00:57:21.851146 | orchestrator | Sunday 23 November 2025 00:55:57 +0000 (0:00:01.094) 0:00:07.492 ******* 2025-11-23 00:57:21.851162 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.851172 | orchestrator | 2025-11-23 00:57:21.851181 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-11-23 00:57:21.851190 | orchestrator | Sunday 23 November 2025 00:55:58 +0000 (0:00:01.060) 0:00:08.553 ******* 2025-11-23 00:57:21.851200 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.851209 | orchestrator | 2025-11-23 
00:57:21.851219 | orchestrator | TASK [Create admin user] ******************************************************* 2025-11-23 00:57:21.851228 | orchestrator | Sunday 23 November 2025 00:55:59 +0000 (0:00:01.120) 0:00:09.674 ******* 2025-11-23 00:57:21.851237 | orchestrator | changed: [testbed-manager] 2025-11-23 00:57:21.851247 | orchestrator | 2025-11-23 00:57:21.851256 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-11-23 00:57:21.851266 | orchestrator | Sunday 23 November 2025 00:56:54 +0000 (0:00:55.040) 0:01:04.714 ******* 2025-11-23 00:57:21.851275 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:57:21.851284 | orchestrator | 2025-11-23 00:57:21.851294 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-23 00:57:21.851331 | orchestrator | 2025-11-23 00:57:21.851341 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-23 00:57:21.851351 | orchestrator | Sunday 23 November 2025 00:56:54 +0000 (0:00:00.142) 0:01:04.856 ******* 2025-11-23 00:57:21.851360 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:57:21.851370 | orchestrator | 2025-11-23 00:57:21.851379 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-23 00:57:21.851389 | orchestrator | 2025-11-23 00:57:21.851399 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-23 00:57:21.851408 | orchestrator | Sunday 23 November 2025 00:56:56 +0000 (0:00:01.524) 0:01:06.381 ******* 2025-11-23 00:57:21.851418 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:57:21.851427 | orchestrator | 2025-11-23 00:57:21.851437 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-23 00:57:21.851446 | orchestrator | 2025-11-23 00:57:21.851456 | orchestrator | TASK [Restart ceph manager 
service] ******************************************** 2025-11-23 00:57:21.851472 | orchestrator | Sunday 23 November 2025 00:57:07 +0000 (0:00:11.303) 0:01:17.684 ******* 2025-11-23 00:57:21.851481 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:57:21.851491 | orchestrator | 2025-11-23 00:57:21.851500 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:57:21.851510 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-23 00:57:21.851520 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.851529 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.851539 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 00:57:21.851548 | orchestrator | 2025-11-23 00:57:21.851558 | orchestrator | 2025-11-23 00:57:21.851567 | orchestrator | 2025-11-23 00:57:21.851577 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:57:21.851586 | orchestrator | Sunday 23 November 2025 00:57:19 +0000 (0:00:11.295) 0:01:28.979 ******* 2025-11-23 00:57:21.851596 | orchestrator | =============================================================================== 2025-11-23 00:57:21.851605 | orchestrator | Create admin user ------------------------------------------------------ 55.04s 2025-11-23 00:57:21.851614 | orchestrator | Restart ceph manager service ------------------------------------------- 24.12s 2025-11-23 00:57:21.851623 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.72s 2025-11-23 00:57:21.851633 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.72s 2025-11-23 00:57:21.851655 | orchestrator | Write 
ceph_dashboard_password to temporary file ------------------------- 1.12s 2025-11-23 00:57:21.851671 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.09s 2025-11-23 00:57:21.851687 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.06s 2025-11-23 00:57:21.851702 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.95s 2025-11-23 00:57:21.851712 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.89s 2025-11-23 00:57:21.851722 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.88s 2025-11-23 00:57:21.851731 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2025-11-23 00:57:21.851816 | orchestrator | 2025-11-23 00:57:21 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED 2025-11-23 00:57:21.851829 | orchestrator | 2025-11-23 00:57:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:57:24.886248 | orchestrator | 2025-11-23 00:57:24 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:57:24.886554 | orchestrator | 2025-11-23 00:57:24 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 00:57:24.887833 | orchestrator | 2025-11-23 00:57:24 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:57:24.888403 | orchestrator | 2025-11-23 00:57:24 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED 2025-11-23 00:57:24.888748 | orchestrator | 2025-11-23 00:57:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:57:27.929836 | orchestrator | 2025-11-23 00:57:27 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:57:27.930124 | orchestrator | 2025-11-23 00:57:27 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 
2025-11-23 00:57:27.932111 | orchestrator | 2025-11-23 00:57:27 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:57:27.933527 | orchestrator | 2025-11-23 00:57:27 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state STARTED 2025-11-23 00:57:27.933560 | orchestrator | 2025-11-23 00:57:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:58:47.046163 | orchestrator | 2025-11-23 00:58:47 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:58:47.051456 | orchestrator | 2025-11-23 00:58:47 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 00:58:47.052396 | orchestrator | 2025-11-23 00:58:47 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:58:47.053263 | orchestrator | 2025-11-23 00:58:47 | INFO  | Task 5bf3fef0-947e-4dab-b543-0bc3dbf9b200 is in state SUCCESS 2025-11-23 00:58:47.054881 | orchestrator | 2025-11-23 00:58:47.054965 | orchestrator | 2025-11-23 00:58:47.055009 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 00:58:47.055019 | orchestrator | 2025-11-23 00:58:47.055028 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:58:47.055040 | orchestrator | Sunday 23 November 2025 00:55:56 +0000 (0:00:00.243) 0:00:00.243
******* 2025-11-23 00:58:47.055056 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:58:47.055072 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:58:47.055086 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:58:47.055100 | orchestrator | 2025-11-23 00:58:47.055144 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:58:47.055159 | orchestrator | Sunday 23 November 2025 00:55:56 +0000 (0:00:00.294) 0:00:00.538 ******* 2025-11-23 00:58:47.055173 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-11-23 00:58:47.055189 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-11-23 00:58:47.055203 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-11-23 00:58:47.055218 | orchestrator | 2025-11-23 00:58:47.055232 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-11-23 00:58:47.055247 | orchestrator | 2025-11-23 00:58:47.055262 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-23 00:58:47.055276 | orchestrator | Sunday 23 November 2025 00:55:57 +0000 (0:00:00.647) 0:00:01.185 ******* 2025-11-23 00:58:47.055290 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:58:47.055354 | orchestrator | 2025-11-23 00:58:47.055368 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-11-23 00:58:47.055383 | orchestrator | Sunday 23 November 2025 00:55:58 +0000 (0:00:00.680) 0:00:01.865 ******* 2025-11-23 00:58:47.055397 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-11-23 00:58:47.055411 | orchestrator | 2025-11-23 00:58:47.055426 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-11-23 00:58:47.055441 | orchestrator | Sunday 23 November 2025 00:56:03 
+0000 (0:00:04.850) 0:00:06.716 ******* 2025-11-23 00:58:47.055456 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-11-23 00:58:47.055472 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-11-23 00:58:47.055487 | orchestrator | 2025-11-23 00:58:47.055502 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-11-23 00:58:47.055518 | orchestrator | Sunday 23 November 2025 00:56:10 +0000 (0:00:07.121) 0:00:13.837 ******* 2025-11-23 00:58:47.055532 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-23 00:58:47.055546 | orchestrator | 2025-11-23 00:58:47.055555 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-11-23 00:58:47.055563 | orchestrator | Sunday 23 November 2025 00:56:13 +0000 (0:00:03.527) 0:00:17.364 ******* 2025-11-23 00:58:47.055572 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-23 00:58:47.055581 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-11-23 00:58:47.055590 | orchestrator | 2025-11-23 00:58:47.055599 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-11-23 00:58:47.055630 | orchestrator | Sunday 23 November 2025 00:56:18 +0000 (0:00:04.779) 0:00:22.144 ******* 2025-11-23 00:58:47.055639 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-23 00:58:47.055648 | orchestrator | 2025-11-23 00:58:47.055657 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-11-23 00:58:47.055665 | orchestrator | Sunday 23 November 2025 00:56:22 +0000 (0:00:04.024) 0:00:26.169 ******* 2025-11-23 00:58:47.055674 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-11-23 00:58:47.055683 | orchestrator | 2025-11-23 
00:58:47.055691 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-11-23 00:58:47.055700 | orchestrator | Sunday 23 November 2025 00:56:26 +0000 (0:00:04.006) 0:00:30.175 ******* 2025-11-23 00:58:47.055741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.055756 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.055773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.055783 | orchestrator | 2025-11-23 00:58:47.055792 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-23 00:58:47.055801 | orchestrator | Sunday 23 November 2025 00:56:30 +0000 (0:00:03.600) 0:00:33.775 ******* 2025-11-23 00:58:47.055810 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:58:47.055819 | orchestrator | 2025-11-23 00:58:47.055833 | orchestrator | TASK [glance 
: Ensuring glance service ceph config subdir exists] ************** 2025-11-23 00:58:47.055842 | orchestrator | Sunday 23 November 2025 00:56:30 +0000 (0:00:00.600) 0:00:34.376 ******* 2025-11-23 00:58:47.055850 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:47.055859 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:47.055868 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:58:47.055876 | orchestrator | 2025-11-23 00:58:47.055885 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-11-23 00:58:47.055898 | orchestrator | Sunday 23 November 2025 00:56:34 +0000 (0:00:03.435) 0:00:37.812 ******* 2025-11-23 00:58:47.055907 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-23 00:58:47.055916 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-23 00:58:47.055924 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-23 00:58:47.055934 | orchestrator | 2025-11-23 00:58:47.055949 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-11-23 00:58:47.055963 | orchestrator | Sunday 23 November 2025 00:56:35 +0000 (0:00:01.626) 0:00:39.439 ******* 2025-11-23 00:58:47.055976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-23 00:58:47.055990 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-23 00:58:47.056004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-23 00:58:47.056027 | orchestrator | 2025-11-23 00:58:47.056066 | orchestrator | TASK [glance : Ensuring config directory has correct owner and 
permission] ***** 2025-11-23 00:58:47.056082 | orchestrator | Sunday 23 November 2025 00:56:37 +0000 (0:00:01.139) 0:00:40.578 ******* 2025-11-23 00:58:47.056096 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:58:47.056105 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:58:47.056114 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:58:47.056122 | orchestrator | 2025-11-23 00:58:47.056131 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-11-23 00:58:47.056140 | orchestrator | Sunday 23 November 2025 00:56:37 +0000 (0:00:00.589) 0:00:41.168 ******* 2025-11-23 00:58:47.056149 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.056157 | orchestrator | 2025-11-23 00:58:47.056172 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-11-23 00:58:47.056186 | orchestrator | Sunday 23 November 2025 00:56:37 +0000 (0:00:00.231) 0:00:41.400 ******* 2025-11-23 00:58:47.056200 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.056215 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.056230 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.056244 | orchestrator | 2025-11-23 00:58:47.056259 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-23 00:58:47.056273 | orchestrator | Sunday 23 November 2025 00:56:38 +0000 (0:00:00.248) 0:00:41.648 ******* 2025-11-23 00:58:47.056288 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 00:58:47.056324 | orchestrator | 2025-11-23 00:58:47.056340 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-11-23 00:58:47.056356 | orchestrator | Sunday 23 November 2025 00:56:38 +0000 (0:00:00.486) 0:00:42.134 ******* 2025-11-23 00:58:47.056383 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.056410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.056429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.056440 | orchestrator | 2025-11-23 00:58:47.056448 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-11-23 00:58:47.056457 | orchestrator | Sunday 23 November 2025 00:56:43 +0000 (0:00:04.614) 0:00:46.749 ******* 2025-11-23 00:58:47.056478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:58:47.056495 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.056505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:58:47.056514 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.056534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:58:47.056550 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.056559 | orchestrator | 2025-11-23 00:58:47.056568 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-11-23 00:58:47.056577 | orchestrator | Sunday 23 November 2025 00:56:45 +0000 (0:00:02.559) 0:00:49.309 ******* 2025-11-23 00:58:47.056586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:58:47.056595 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.056611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:58:47.056626 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.056639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-23 00:58:47.056655 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.056669 | orchestrator | 2025-11-23 00:58:47.056685 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-11-23 00:58:47.056700 | orchestrator | Sunday 23 November 2025 00:56:48 +0000 (0:00:02.827) 0:00:52.137 ******* 2025-11-23 00:58:47.056714 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.056729 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.056745 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.056758 | orchestrator | 2025-11-23 00:58:47.056772 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-11-23 00:58:47.056781 | orchestrator | Sunday 23 November 2025 00:56:51 +0000 (0:00:03.091) 0:00:55.228 ******* 2025-11-23 00:58:47.056796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.056823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.056833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.056843 | orchestrator | 2025-11-23 00:58:47.056852 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-11-23 00:58:47.056860 | orchestrator | Sunday 23 November 2025 00:56:55 +0000 (0:00:03.576) 0:00:58.805 ******* 2025-11-23 00:58:47.056869 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:47.056883 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:58:47.056892 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:47.056900 | orchestrator | 2025-11-23 00:58:47.056909 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-11-23 00:58:47.056918 | orchestrator | Sunday 23 November 2025 00:57:02 +0000 (0:00:07.313) 0:01:06.118 ******* 2025-11-23 00:58:47.056926 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.056934 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.056943 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.056952 | orchestrator | 2025-11-23 00:58:47.056960 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-11-23 00:58:47.057107 | orchestrator | Sunday 23 November 2025 00:57:07 +0000 (0:00:04.620) 
0:01:10.739 ******* 2025-11-23 00:58:47.057120 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.057128 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.057137 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.057146 | orchestrator | 2025-11-23 00:58:47.057154 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-11-23 00:58:47.057163 | orchestrator | Sunday 23 November 2025 00:57:11 +0000 (0:00:03.887) 0:01:14.626 ******* 2025-11-23 00:58:47.057177 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.057186 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.057194 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.057203 | orchestrator | 2025-11-23 00:58:47.057211 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-11-23 00:58:47.057220 | orchestrator | Sunday 23 November 2025 00:57:15 +0000 (0:00:03.983) 0:01:18.609 ******* 2025-11-23 00:58:47.057228 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.057237 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.057245 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.057254 | orchestrator | 2025-11-23 00:58:47.057268 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-11-23 00:58:47.057283 | orchestrator | Sunday 23 November 2025 00:57:18 +0000 (0:00:03.373) 0:01:21.982 ******* 2025-11-23 00:58:47.057318 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.057335 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.057350 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.057365 | orchestrator | 2025-11-23 00:58:47.057381 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-11-23 00:58:47.057397 | orchestrator | Sunday 23 November 2025 00:57:18 +0000 (0:00:00.260) 
0:01:22.243 ******* 2025-11-23 00:58:47.057412 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-23 00:58:47.057427 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.057441 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-23 00:58:47.057455 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.057470 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-23 00:58:47.057486 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.057500 | orchestrator | 2025-11-23 00:58:47.057514 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-11-23 00:58:47.057529 | orchestrator | Sunday 23 November 2025 00:57:22 +0000 (0:00:03.838) 0:01:26.081 ******* 2025-11-23 00:58:47.057547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.057599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.057614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-23 00:58:47.057631 | orchestrator | 2025-11-23 00:58:47.057640 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-23 00:58:47.057649 | orchestrator | Sunday 23 November 2025 00:57:27 +0000 (0:00:04.836) 0:01:30.918 ******* 2025-11-23 00:58:47.057657 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:47.057666 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:47.057681 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:47.057696 | orchestrator | 2025-11-23 00:58:47.057710 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-11-23 00:58:47.057725 | orchestrator | Sunday 23 November 2025 00:57:28 +0000 (0:00:00.728) 0:01:31.647 ******* 2025-11-23 00:58:47.057741 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:47.057756 | orchestrator | 2025-11-23 00:58:47.057770 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-11-23 00:58:47.057786 | orchestrator | Sunday 23 November 2025 00:57:30 +0000 (0:00:02.563) 0:01:34.210 ******* 2025-11-23 00:58:47.057802 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:47.057818 | orchestrator | 2025-11-23 00:58:47.057833 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-11-23 00:58:47.057846 | orchestrator | Sunday 23 November 2025 00:57:33 +0000 (0:00:02.572) 0:01:36.782 ******* 2025-11-23 00:58:47.057857 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:47.057867 | orchestrator | 2025-11-23 00:58:47.057876 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-11-23 00:58:47.057886 | orchestrator | Sunday 23 November 2025 00:57:35 +0000 
(0:00:01.942) 0:01:38.724 ******* 2025-11-23 00:58:47.057897 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:47.057906 | orchestrator | 2025-11-23 00:58:47.057916 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-11-23 00:58:47.057926 | orchestrator | Sunday 23 November 2025 00:58:08 +0000 (0:00:33.237) 0:02:11.962 ******* 2025-11-23 00:58:47.057937 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:47.057947 | orchestrator | 2025-11-23 00:58:47.057961 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-23 00:58:47.057970 | orchestrator | Sunday 23 November 2025 00:58:10 +0000 (0:00:02.244) 0:02:14.207 ******* 2025-11-23 00:58:47.057979 | orchestrator | 2025-11-23 00:58:47.057987 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-23 00:58:47.057996 | orchestrator | Sunday 23 November 2025 00:58:10 +0000 (0:00:00.132) 0:02:14.340 ******* 2025-11-23 00:58:47.058004 | orchestrator | 2025-11-23 00:58:47.058062 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-23 00:58:47.058074 | orchestrator | Sunday 23 November 2025 00:58:10 +0000 (0:00:00.145) 0:02:14.485 ******* 2025-11-23 00:58:47.058082 | orchestrator | 2025-11-23 00:58:47.058091 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-11-23 00:58:47.058099 | orchestrator | Sunday 23 November 2025 00:58:11 +0000 (0:00:00.156) 0:02:14.642 ******* 2025-11-23 00:58:47.058108 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:47.058116 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:58:47.058125 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:47.058133 | orchestrator | 2025-11-23 00:58:47.058142 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 
00:58:47.058152 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-11-23 00:58:47.058170 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-23 00:58:47.058178 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-23 00:58:47.058187 | orchestrator | 2025-11-23 00:58:47.058195 | orchestrator | 2025-11-23 00:58:47.058204 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:58:47.058212 | orchestrator | Sunday 23 November 2025 00:58:46 +0000 (0:00:35.008) 0:02:49.650 ******* 2025-11-23 00:58:47.058221 | orchestrator | =============================================================================== 2025-11-23 00:58:47.058229 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.01s 2025-11-23 00:58:47.058238 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 33.24s 2025-11-23 00:58:47.058246 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.31s 2025-11-23 00:58:47.058255 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.12s 2025-11-23 00:58:47.058263 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.85s 2025-11-23 00:58:47.058272 | orchestrator | glance : Check glance containers ---------------------------------------- 4.84s 2025-11-23 00:58:47.058280 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.78s 2025-11-23 00:58:47.058289 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.62s 2025-11-23 00:58:47.058350 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.61s 2025-11-23 00:58:47.058360 | 
orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.02s 2025-11-23 00:58:47.058368 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.01s 2025-11-23 00:58:47.058377 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.98s 2025-11-23 00:58:47.058385 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.89s 2025-11-23 00:58:47.058394 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.84s 2025-11-23 00:58:47.058402 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.60s 2025-11-23 00:58:47.058411 | orchestrator | glance : Copying over config.json files for services -------------------- 3.58s 2025-11-23 00:58:47.058419 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.53s 2025-11-23 00:58:47.058428 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.44s 2025-11-23 00:58:47.058436 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.37s 2025-11-23 00:58:47.058445 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.09s 2025-11-23 00:58:47.058453 | orchestrator | 2025-11-23 00:58:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:58:50.104131 | orchestrator | 2025-11-23 00:58:50 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:58:50.106224 | orchestrator | 2025-11-23 00:58:50 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 00:58:50.108464 | orchestrator | 2025-11-23 00:58:50 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:58:50.110824 | orchestrator | 2025-11-23 00:58:50 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 
2025-11-23 00:58:50.111895 | orchestrator | 2025-11-23 00:58:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:58:53.151724 | orchestrator | 2025-11-23 00:58:53 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:58:53.153428 | orchestrator | 2025-11-23 00:58:53 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 00:58:53.155153 | orchestrator | 2025-11-23 00:58:53 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state STARTED 2025-11-23 00:58:53.156931 | orchestrator | 2025-11-23 00:58:53 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 00:58:53.156997 | orchestrator | 2025-11-23 00:58:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:58:56.198214 | orchestrator | 2025-11-23 00:58:56 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:58:56.198670 | orchestrator | 2025-11-23 00:58:56 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 00:58:56.203025 | orchestrator | 2025-11-23 00:58:56 | INFO  | Task df1a576d-f1b9-4c9c-81e1-e77e0ef6ac69 is in state SUCCESS 2025-11-23 00:58:56.204812 | orchestrator | 2025-11-23 00:58:56.204861 | orchestrator | 2025-11-23 00:58:56.204874 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 00:58:56.204912 | orchestrator | 2025-11-23 00:58:56.204925 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 00:58:56.204937 | orchestrator | Sunday 23 November 2025 00:55:50 +0000 (0:00:00.245) 0:00:00.245 ******* 2025-11-23 00:58:56.204948 | orchestrator | ok: [testbed-manager] 2025-11-23 00:58:56.204961 | orchestrator | ok: [testbed-node-0] 2025-11-23 00:58:56.204972 | orchestrator | ok: [testbed-node-1] 2025-11-23 00:58:56.204983 | orchestrator | ok: [testbed-node-2] 2025-11-23 00:58:56.204994 | orchestrator | ok: 
[testbed-node-3] 2025-11-23 00:58:56.205004 | orchestrator | ok: [testbed-node-4] 2025-11-23 00:58:56.205015 | orchestrator | ok: [testbed-node-5] 2025-11-23 00:58:56.205025 | orchestrator | 2025-11-23 00:58:56.205036 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 00:58:56.205047 | orchestrator | Sunday 23 November 2025 00:55:50 +0000 (0:00:00.745) 0:00:00.991 ******* 2025-11-23 00:58:56.205058 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-11-23 00:58:56.205069 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-11-23 00:58:56.205080 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-11-23 00:58:56.205091 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-11-23 00:58:56.205101 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-11-23 00:58:56.205112 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-11-23 00:58:56.205122 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-11-23 00:58:56.205133 | orchestrator | 2025-11-23 00:58:56.205144 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-11-23 00:58:56.205154 | orchestrator | 2025-11-23 00:58:56.205179 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-11-23 00:58:56.205191 | orchestrator | Sunday 23 November 2025 00:55:51 +0000 (0:00:00.713) 0:00:01.705 ******* 2025-11-23 00:58:56.205326 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:58:56.205376 | orchestrator | 2025-11-23 00:58:56.205389 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-11-23 00:58:56.205401 | orchestrator 
| Sunday 23 November 2025 00:55:53 +0000 (0:00:01.473) 0:00:03.178 ******* 2025-11-23 00:58:56.205417 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-23 00:58:56.205458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.205471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.205494 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.205565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.205612 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.205625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.205638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.205650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.205671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.205683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.205700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.205719 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.205732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.205744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.205757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.205841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.205853 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-23 00:58:56.205875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.205895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.205907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.205919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.205930 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206166 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206264 | orchestrator | 2025-11-23 00:58:56.206275 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-11-23 00:58:56.206286 | orchestrator | Sunday 23 November 2025 00:55:56 +0000 (0:00:03.346) 0:00:06.524 ******* 2025-11-23 00:58:56.206321 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 00:58:56.206333 | orchestrator | 2025-11-23 00:58:56.206352 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-11-23 00:58:56.206363 | orchestrator | Sunday 23 November 2025 00:55:57 +0000 (0:00:01.475) 0:00:08.000 ******* 2025-11-23 00:58:56.206374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-23 00:58:56.206386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.206398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.206449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-11-23 00:58:56.206475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.206488 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.206499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.206517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.206620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206680 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206805 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-23 00:58:56.206822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206853 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.206903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.206993 | orchestrator | 2025-11-23 00:58:56.207004 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-11-23 00:58:56.207016 | orchestrator | Sunday 23 November 2025 00:56:03 +0000 (0:00:05.726) 0:00:13.727 ******* 2025-11-23 00:58:56.207027 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-23 00:58:56.207039 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207050 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207074 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-23 00:58:56.207107 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207183 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:58:56.207195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 
00:58:56.207228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207468 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.207479 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.207490 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.207501 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.207512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207546 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.207557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207607 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.207618 | orchestrator | 2025-11-23 00:58:56.207629 | orchestrator | TASK [service-cert-copy : prometheus | Copying over 
backend internal TLS key] *** 2025-11-23 00:58:56.207640 | orchestrator | Sunday 23 November 2025 00:56:05 +0000 (0:00:01.439) 0:00:15.167 ******* 2025-11-23 00:58:56.207651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 
00:58:56.207863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207895 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-23 00:58:56.207905 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.207916 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.207932 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.207952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-23 00:58:56.207965 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.207974 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.207984 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:58:56.207994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2025-11-23 00:58:56.208004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.208014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.208023 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.208043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.208053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.208074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.208084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.208094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-23 00:58:56.208104 | orchestrator | skipping: 
[testbed-node-2] 2025-11-23 00:58:56.208114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.208144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.208155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.208195 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.208213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-23 00:58:56.208235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.208261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-23 00:58:56.208279 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.208317 | orchestrator | 2025-11-23 00:58:56.208333 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-11-23 00:58:56.208350 | orchestrator | Sunday 23 November 2025 00:56:06 +0000 (0:00:01.880) 0:00:17.047 ******* 2025-11-23 00:58:56.208366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.208377 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-23 00:58:56.208387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.208405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.208415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.208425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.208446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.208457 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.208467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208544 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208591 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-23 00:58:56.208602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}}) 2025-11-23 00:58:56.208637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208667 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.208694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208714 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.208723 | orchestrator | 2025-11-23 00:58:56.208737 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-11-23 00:58:56.208747 | orchestrator | Sunday 23 November 2025 00:56:12 +0000 (0:00:06.010) 0:00:23.057 ******* 2025-11-23 00:58:56.208757 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:58:56.208767 | orchestrator | 2025-11-23 00:58:56.208777 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-11-23 00:58:56.208792 | orchestrator | Sunday 23 November 2025 00:56:13 +0000 (0:00:00.872) 0:00:23.930 ******* 2025-11-23 00:58:56.208802 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1310784, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7838714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.208813 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1310784, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7838714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-23 00:58:56.208829 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208840 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208850 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208860 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208880 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208891 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208901 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208918 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208928 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208938 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208948 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208966 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208977 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', …})
2025-11-23 00:58:56.208992 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209002 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209013 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209023 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209033 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209053 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209064 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209081 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209091 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209101 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209111 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209121 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209135 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209152 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209169 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209180 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209190 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209200 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209210 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209224 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209240 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209256 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209266 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209276 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209286 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209313 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209328 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209355 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209366 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209376 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209386 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209396 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209406 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209420 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209610 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209626 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209636 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209647 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209656 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209667 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209682 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209706 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209717 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209727 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209738 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209748 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209758 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209772 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209795 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209806 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209816 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209826 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209836 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209846 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', …})
2025-11-23 00:58:56.209866 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1310781, 'dev': 159,
'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7829387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.209882 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1310788, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7843142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.209892 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310773, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7824602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.209902 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1311340, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.9559405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.209913 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1310760, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7779386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.209923 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1310792, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7849388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.209932 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1310781, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7829387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 
00:58:56.209954 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311592, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0039408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.209971 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1310760, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7779386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.209982 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311592, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0039408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.209992 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311592, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0039408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210002 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310755, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7746913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210012 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1310792, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7849388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210070 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1310790, 
'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.784813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210092 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1310765, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7809386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210110 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1310792, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7849388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210121 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311606, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0066442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210131 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.210142 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311612, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0069408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210152 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310755, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7746913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210162 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310755, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7746913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210178 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1310790, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.784813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210193 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311590, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.003494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210208 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1310790, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.784813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210220 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311612, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0069408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210230 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311612, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0069408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210240 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311606, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0066442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210252 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.210263 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310773, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7824602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210283 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311606, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0066442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210312 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.210328 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1310760, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7779386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210346 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311590, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.003494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210358 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1310792, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7849388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210369 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310773, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7824602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210380 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311590, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.003494, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210399 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1310760, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7779386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210410 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1310786, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7839386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210426 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310773, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7824602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210443 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1310792, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7849388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210455 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1310790, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.784813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210466 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1310790, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.784813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210478 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1310760, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7779386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210495 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311606, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0066442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210506 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.210517 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1310792, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7849388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210533 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1311337, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.9559405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210549 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311606, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0066442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210561 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.210572 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1310790, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.784813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210583 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3539, 'inode': 1311606, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0066442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-23 00:58:56.210595 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.210606 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1310788, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7843142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210622 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1310781, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7829387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210632 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311592, 'dev': 159, 'nlink': 1, 'atime': 
1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0039408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210646 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310755, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7746913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210661 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311612, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0069408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210672 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311590, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.003494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210682 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1310773, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7824602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210697 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1310760, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7779386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210707 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1310792, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7849388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210717 | orchestrator 
| changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1310790, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.784813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210732 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311606, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856902.0066442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-23 00:58:56.210742 | orchestrator | 2025-11-23 00:58:56.210752 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-11-23 00:58:56.210762 | orchestrator | Sunday 23 November 2025 00:56:39 +0000 (0:00:25.312) 0:00:49.243 ******* 2025-11-23 00:58:56.210772 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:58:56.210782 | orchestrator | 2025-11-23 00:58:56.210797 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-11-23 00:58:56.210807 | orchestrator | Sunday 23 November 2025 00:56:39 +0000 (0:00:00.655) 0:00:49.898 ******* 2025-11-23 00:58:56.210817 | orchestrator | [WARNING]: Skipped 2025-11-23 00:58:56.210828 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.210838 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-11-23 00:58:56.210848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.210858 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-11-23 00:58:56.210867 | orchestrator | [WARNING]: Skipped 2025-11-23 00:58:56.210877 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.210887 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-11-23 00:58:56.210897 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.210906 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-11-23 00:58:56.210916 | orchestrator | [WARNING]: Skipped 2025-11-23 00:58:56.210931 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.210940 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-11-23 00:58:56.210950 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.210960 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-11-23 00:58:56.210970 | orchestrator | [WARNING]: Skipped 2025-11-23 00:58:56.210979 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.210989 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-11-23 00:58:56.210998 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.211008 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-11-23 00:58:56.211018 | orchestrator | [WARNING]: Skipped 2025-11-23 00:58:56.211027 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.211037 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-11-23 00:58:56.211047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.211056 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-11-23 00:58:56.211066 | orchestrator | [WARNING]: Skipped 2025-11-23 00:58:56.211075 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.211085 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-11-23 00:58:56.211094 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.211104 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-11-23 00:58:56.211113 | orchestrator | [WARNING]: Skipped 2025-11-23 00:58:56.211123 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.211133 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-11-23 00:58:56.211142 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-23 00:58:56.211152 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-11-23 00:58:56.211162 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 00:58:56.211172 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:58:56.211181 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-23 00:58:56.211191 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-23 00:58:56.211200 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-23 00:58:56.211210 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-23 00:58:56.211219 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-23 00:58:56.211229 | orchestrator | 2025-11-23 00:58:56.211239 | orchestrator | TASK 
[prometheus : Copying over prometheus config file] ************************ 2025-11-23 00:58:56.211248 | orchestrator | Sunday 23 November 2025 00:56:42 +0000 (0:00:02.249) 0:00:52.147 ******* 2025-11-23 00:58:56.211258 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-23 00:58:56.211268 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-23 00:58:56.211277 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.211287 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.211323 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-23 00:58:56.211333 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.211343 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-23 00:58:56.211353 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.211363 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-23 00:58:56.211372 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.211382 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-23 00:58:56.211399 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.211413 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-11-23 00:58:56.211424 | orchestrator | 2025-11-23 00:58:56.211433 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-11-23 00:58:56.211443 | orchestrator | Sunday 23 November 2025 00:56:55 +0000 (0:00:13.257) 0:01:05.405 ******* 2025-11-23 00:58:56.211458 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  
2025-11-23 00:58:56.211468 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-23 00:58:56.211478 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.211488 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.211497 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-23 00:58:56.211507 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.211516 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-23 00:58:56.211526 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.211535 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-23 00:58:56.211545 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.211554 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-23 00:58:56.211564 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.211573 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-11-23 00:58:56.211583 | orchestrator | 2025-11-23 00:58:56.211593 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-11-23 00:58:56.211602 | orchestrator | Sunday 23 November 2025 00:56:59 +0000 (0:00:03.915) 0:01:09.321 ******* 2025-11-23 00:58:56.211612 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-23 00:58:56.211622 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-23 00:58:56.211632 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.211642 | 
orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.211652 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-23 00:58:56.211661 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.211671 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-23 00:58:56.211680 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.211690 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-11-23 00:58:56.211700 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-23 00:58:56.211710 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.211720 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-23 00:58:56.211729 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.211739 | orchestrator | 2025-11-23 00:58:56.211748 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-11-23 00:58:56.211758 | orchestrator | Sunday 23 November 2025 00:57:01 +0000 (0:00:01.849) 0:01:11.171 ******* 2025-11-23 00:58:56.211768 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:58:56.211777 | orchestrator | 2025-11-23 00:58:56.211787 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-11-23 00:58:56.211804 | orchestrator | Sunday 23 November 2025 00:57:01 +0000 (0:00:00.557) 0:01:11.728 ******* 2025-11-23 00:58:56.211813 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:58:56.211823 | orchestrator | skipping: [testbed-node-0] 
2025-11-23 00:58:56.211832 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.211842 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.211851 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.211860 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.211870 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.211880 | orchestrator | 2025-11-23 00:58:56.211889 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-11-23 00:58:56.211899 | orchestrator | Sunday 23 November 2025 00:57:02 +0000 (0:00:00.512) 0:01:12.241 ******* 2025-11-23 00:58:56.211909 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:58:56.211918 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.211928 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.211937 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.211946 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:56.211956 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:56.211965 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:58:56.211974 | orchestrator | 2025-11-23 00:58:56.211984 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-11-23 00:58:56.211993 | orchestrator | Sunday 23 November 2025 00:57:04 +0000 (0:00:02.522) 0:01:14.763 ******* 2025-11-23 00:58:56.212003 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-23 00:58:56.212012 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:58:56.212022 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-23 00:58:56.212036 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-23 00:58:56.212046 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.212055 | 
orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.212065 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-23 00:58:56.212075 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.212088 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-23 00:58:56.212099 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.212108 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-23 00:58:56.212118 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.212127 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-23 00:58:56.212137 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.212146 | orchestrator | 2025-11-23 00:58:56.212156 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-11-23 00:58:56.212173 | orchestrator | Sunday 23 November 2025 00:57:06 +0000 (0:00:02.156) 0:01:16.919 ******* 2025-11-23 00:58:56.212190 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-23 00:58:56.212207 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.212223 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-23 00:58:56.212239 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-11-23 00:58:56.212256 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.212273 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-23 00:58:56.212291 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.212338 | 
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-23 00:58:56.212366 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.212376 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-23 00:58:56.212386 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.212396 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-23 00:58:56.212405 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.212415 | orchestrator | 2025-11-23 00:58:56.212424 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-11-23 00:58:56.212434 | orchestrator | Sunday 23 November 2025 00:57:08 +0000 (0:00:01.979) 0:01:18.899 ******* 2025-11-23 00:58:56.212444 | orchestrator | [WARNING]: Skipped 2025-11-23 00:58:56.212454 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-11-23 00:58:56.212464 | orchestrator | due to this access issue: 2025-11-23 00:58:56.212474 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-11-23 00:58:56.212483 | orchestrator | not a directory 2025-11-23 00:58:56.212493 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-23 00:58:56.212503 | orchestrator | 2025-11-23 00:58:56.212513 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-11-23 00:58:56.212522 | orchestrator | Sunday 23 November 2025 00:57:09 +0000 (0:00:00.813) 0:01:19.712 ******* 2025-11-23 00:58:56.212532 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:58:56.212542 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.212551 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.212561 | orchestrator | 
skipping: [testbed-node-2] 2025-11-23 00:58:56.212570 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.212580 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.212589 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.212599 | orchestrator | 2025-11-23 00:58:56.212609 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-11-23 00:58:56.212619 | orchestrator | Sunday 23 November 2025 00:57:10 +0000 (0:00:00.800) 0:01:20.512 ******* 2025-11-23 00:58:56.212628 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:58:56.212638 | orchestrator | skipping: [testbed-node-0] 2025-11-23 00:58:56.212648 | orchestrator | skipping: [testbed-node-1] 2025-11-23 00:58:56.212657 | orchestrator | skipping: [testbed-node-2] 2025-11-23 00:58:56.212667 | orchestrator | skipping: [testbed-node-3] 2025-11-23 00:58:56.212676 | orchestrator | skipping: [testbed-node-4] 2025-11-23 00:58:56.212686 | orchestrator | skipping: [testbed-node-5] 2025-11-23 00:58:56.212696 | orchestrator | 2025-11-23 00:58:56.212705 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-11-23 00:58:56.212715 | orchestrator | Sunday 23 November 2025 00:57:11 +0000 (0:00:00.773) 0:01:21.286 ******* 2025-11-23 00:58:56.212732 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-23 00:58:56.212751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.212768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.212779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.212789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.212799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.212809 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.212819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-23 00:58:56.212834 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.212855 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.212866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.212877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.212887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.212901 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.212918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.212937 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-23 00:58:56.212961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.212972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.212982 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.212992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.213002 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.213013 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.213023 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.213037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.213058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.213069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-23 00:58:56.213079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.213090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.213100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-23 00:58:56.213109 | orchestrator | 2025-11-23 00:58:56.213119 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-11-23 00:58:56.213129 | orchestrator | Sunday 23 November 2025 00:57:15 +0000 (0:00:04.608) 0:01:25.894 ******* 2025-11-23 00:58:56.213139 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-23 00:58:56.213148 | orchestrator | skipping: [testbed-manager] 2025-11-23 00:58:56.213158 | orchestrator | 2025-11-23 00:58:56.213168 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-23 00:58:56.213177 | orchestrator | Sunday 23 November 2025 00:57:17 +0000 (0:00:01.586) 0:01:27.481 ******* 2025-11-23 00:58:56.213187 | orchestrator | 2025-11-23 00:58:56.213196 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-23 00:58:56.213206 | orchestrator | Sunday 23 November 2025 00:57:17 +0000 (0:00:00.195) 0:01:27.676 ******* 2025-11-23 00:58:56.213215 | orchestrator | 2025-11-23 00:58:56.213231 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-23 00:58:56.213241 | orchestrator | Sunday 23 November 2025 00:57:17 +0000 (0:00:00.128) 0:01:27.805 ******* 2025-11-23 00:58:56.213251 | orchestrator | 2025-11-23 00:58:56.213260 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-23 00:58:56.213269 | orchestrator | Sunday 23 November 2025 00:57:17 +0000 (0:00:00.135) 0:01:27.940 ******* 2025-11-23 00:58:56.213279 | orchestrator | 2025-11-23 00:58:56.213288 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-23 00:58:56.213316 | orchestrator | Sunday 23 November 2025 00:57:18 +0000 (0:00:00.255) 0:01:28.196 ******* 2025-11-23 00:58:56.213326 | orchestrator | 2025-11-23 
00:58:56.213335 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-23 00:58:56.213345 | orchestrator | Sunday 23 November 2025 00:57:18 +0000 (0:00:00.067) 0:01:28.264 ******* 2025-11-23 00:58:56.213354 | orchestrator | 2025-11-23 00:58:56.213364 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-23 00:58:56.213374 | orchestrator | Sunday 23 November 2025 00:57:18 +0000 (0:00:00.076) 0:01:28.340 ******* 2025-11-23 00:58:56.213383 | orchestrator | 2025-11-23 00:58:56.213397 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-11-23 00:58:56.213424 | orchestrator | Sunday 23 November 2025 00:57:18 +0000 (0:00:00.081) 0:01:28.422 ******* 2025-11-23 00:58:56.213434 | orchestrator | changed: [testbed-manager] 2025-11-23 00:58:56.213443 | orchestrator | 2025-11-23 00:58:56.213453 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-11-23 00:58:56.213468 | orchestrator | Sunday 23 November 2025 00:57:33 +0000 (0:00:14.882) 0:01:43.304 ******* 2025-11-23 00:58:56.213477 | orchestrator | changed: [testbed-manager] 2025-11-23 00:58:56.213487 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:56.213497 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:58:56.213506 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:58:56.213516 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:56.213525 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:58:56.213535 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:58:56.213544 | orchestrator | 2025-11-23 00:58:56.213554 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-11-23 00:58:56.213563 | orchestrator | Sunday 23 November 2025 00:57:48 +0000 (0:00:15.153) 0:01:58.458 ******* 2025-11-23 00:58:56.213573 | orchestrator | changed: 
[testbed-node-2] 2025-11-23 00:58:56.213582 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:56.213592 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:56.213601 | orchestrator | 2025-11-23 00:58:56.213611 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-11-23 00:58:56.213621 | orchestrator | Sunday 23 November 2025 00:57:58 +0000 (0:00:10.004) 0:02:08.462 ******* 2025-11-23 00:58:56.213630 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:56.213639 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:58:56.213648 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:56.213658 | orchestrator | 2025-11-23 00:58:56.213667 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-11-23 00:58:56.213677 | orchestrator | Sunday 23 November 2025 00:58:08 +0000 (0:00:09.972) 0:02:18.435 ******* 2025-11-23 00:58:56.213686 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:56.213696 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:58:56.213705 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:56.213715 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:58:56.213724 | orchestrator | changed: [testbed-manager] 2025-11-23 00:58:56.213733 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:58:56.213743 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:58:56.213752 | orchestrator | 2025-11-23 00:58:56.213762 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-11-23 00:58:56.213771 | orchestrator | Sunday 23 November 2025 00:58:23 +0000 (0:00:14.675) 0:02:33.110 ******* 2025-11-23 00:58:56.213789 | orchestrator | changed: [testbed-manager] 2025-11-23 00:58:56.213798 | orchestrator | 2025-11-23 00:58:56.213808 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-11-23 00:58:56.213817 | 
orchestrator | Sunday 23 November 2025 00:58:28 +0000 (0:00:05.710) 0:02:38.821 ******* 2025-11-23 00:58:56.213827 | orchestrator | changed: [testbed-node-2] 2025-11-23 00:58:56.213836 | orchestrator | changed: [testbed-node-1] 2025-11-23 00:58:56.213846 | orchestrator | changed: [testbed-node-0] 2025-11-23 00:58:56.213855 | orchestrator | 2025-11-23 00:58:56.213865 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-11-23 00:58:56.213875 | orchestrator | Sunday 23 November 2025 00:58:39 +0000 (0:00:10.673) 0:02:49.494 ******* 2025-11-23 00:58:56.213885 | orchestrator | changed: [testbed-manager] 2025-11-23 00:58:56.213894 | orchestrator | 2025-11-23 00:58:56.213904 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-11-23 00:58:56.213913 | orchestrator | Sunday 23 November 2025 00:58:44 +0000 (0:00:04.886) 0:02:54.380 ******* 2025-11-23 00:58:56.213923 | orchestrator | changed: [testbed-node-5] 2025-11-23 00:58:56.213932 | orchestrator | changed: [testbed-node-3] 2025-11-23 00:58:56.213942 | orchestrator | changed: [testbed-node-4] 2025-11-23 00:58:56.213952 | orchestrator | 2025-11-23 00:58:56.213961 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 00:58:56.213971 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-23 00:58:56.213981 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-23 00:58:56.213990 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-23 00:58:56.214000 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-23 00:58:56.214009 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 
2025-11-23 00:58:56.214049 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-11-23 00:58:56.214059 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-11-23 00:58:56.214069 | orchestrator | 2025-11-23 00:58:56.214078 | orchestrator | 2025-11-23 00:58:56.214088 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 00:58:56.214098 | orchestrator | Sunday 23 November 2025 00:58:54 +0000 (0:00:10.216) 0:03:04.597 ******* 2025-11-23 00:58:56.214107 | orchestrator | =============================================================================== 2025-11-23 00:58:56.214117 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.31s 2025-11-23 00:58:56.214131 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.15s 2025-11-23 00:58:56.214142 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.88s 2025-11-23 00:58:56.214151 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.68s 2025-11-23 00:58:56.214161 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.26s 2025-11-23 00:58:56.214177 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.67s 2025-11-23 00:58:56.214187 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.22s 2025-11-23 00:58:56.214196 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.00s 2025-11-23 00:58:56.214206 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.97s 2025-11-23 00:58:56.214222 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.01s 2025-11-23 00:58:56.214231 | 
orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.73s 2025-11-23 00:58:56.214241 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 5.71s 2025-11-23 00:58:56.214250 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.89s 2025-11-23 00:58:56.214260 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.61s 2025-11-23 00:58:56.214269 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.92s 2025-11-23 00:58:56.214279 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.35s 2025-11-23 00:58:56.214289 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.52s 2025-11-23 00:58:56.214339 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.25s 2025-11-23 00:58:56.214349 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.16s 2025-11-23 00:58:56.214358 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.98s 2025-11-23 00:58:56.214368 | orchestrator | 2025-11-23 00:58:56 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 00:58:56.214378 | orchestrator | 2025-11-23 00:58:56 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 00:58:56.214388 | orchestrator | 2025-11-23 00:58:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 00:58:59.249778 | orchestrator | 2025-11-23 00:58:59 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state STARTED 2025-11-23 00:58:59.250226 | orchestrator | 2025-11-23 00:58:59 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 00:58:59.251042 | orchestrator | 2025-11-23 00:58:59 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state 
STARTED 2025-11-23 00:58:59.252528 | orchestrator | 2025-11-23 00:58:59 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 00:58:59.252576 | orchestrator | 2025-11-23 00:58:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:30.388584 | orchestrator | 2025-11-23 01:00:30 | INFO  | Task fd7dbec1-7f4f-4acc-84fc-ff50d1c6f973 is in state SUCCESS 2025-11-23 01:00:30.389503 | orchestrator | 2025-11-23 01:00:30.389572 | orchestrator | 2025-11-23 01:00:30.389583 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 01:00:30.389593 | orchestrator | 2025-11-23 01:00:30.389601 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-11-23 01:00:30.389610 | orchestrator | Sunday 23 November 2025 00:56:23 +0000 (0:00:00.275) 0:00:00.275 ******* 2025-11-23 01:00:30.389618 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:00:30.389627 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:00:30.389635 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:00:30.389643 | orchestrator | ok: [testbed-node-3] 2025-11-23 01:00:30.389652 | orchestrator | ok: [testbed-node-4] 2025-11-23 01:00:30.389660 | orchestrator | ok: [testbed-node-5] 2025-11-23 01:00:30.389669 | orchestrator | 2025-11-23 01:00:30.389677 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 01:00:30.389686 | orchestrator | Sunday 23 November 2025 00:56:24 +0000 (0:00:00.862) 0:00:01.137 ******* 2025-11-23 01:00:30.389695 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-11-23 01:00:30.389756 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-11-23 01:00:30.389766 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-11-23 01:00:30.389775 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-11-23 01:00:30.389808 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-11-23 01:00:30.389819 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-11-23 01:00:30.389827 | orchestrator | 2025-11-23 01:00:30.389836 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-11-23 01:00:30.389845 | orchestrator | 2025-11-23 01:00:30.389853 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-23 01:00:30.389988 | orchestrator | Sunday 23 November 2025 00:56:25 +0000 (0:00:00.625) 0:00:01.762 ******* 2025-11-23 01:00:30.390011 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-11-23 01:00:30.390066 | orchestrator | 2025-11-23 01:00:30.390075 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-11-23 01:00:30.390083 | orchestrator | Sunday 23 November 2025 00:56:26 +0000 (0:00:01.298) 0:00:03.061 ******* 2025-11-23 01:00:30.390093 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-11-23 01:00:30.390102 | orchestrator | 2025-11-23 01:00:30.390110 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-11-23 01:00:30.390119 | orchestrator | Sunday 23 November 2025 00:56:30 +0000 (0:00:03.872) 0:00:06.933 ******* 2025-11-23 01:00:30.390128 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-11-23 01:00:30.390137 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-11-23 01:00:30.390146 | orchestrator | 2025-11-23 01:00:30.390154 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-11-23 01:00:30.390163 | orchestrator | Sunday 23 November 2025 00:56:37 +0000 (0:00:06.896) 0:00:13.829 ******* 2025-11-23 01:00:30.390171 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-23 01:00:30.390226 | orchestrator | 2025-11-23 01:00:30.390235 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-11-23 01:00:30.390244 | orchestrator | Sunday 23 November 2025 00:56:40 +0000 (0:00:03.485) 0:00:17.314 ******* 2025-11-23 01:00:30.390252 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-23 01:00:30.390261 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-11-23 01:00:30.390269 | orchestrator | 2025-11-23 01:00:30.390278 | orchestrator | TASK [service-ks-register : cinder | Creating roles] 
*************************** 2025-11-23 01:00:30.390390 | orchestrator | Sunday 23 November 2025 00:56:44 +0000 (0:00:03.957) 0:00:21.272 ******* 2025-11-23 01:00:30.390405 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-23 01:00:30.390414 | orchestrator | 2025-11-23 01:00:30.390423 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-11-23 01:00:30.390431 | orchestrator | Sunday 23 November 2025 00:56:48 +0000 (0:00:03.597) 0:00:24.870 ******* 2025-11-23 01:00:30.390440 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-11-23 01:00:30.390448 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-11-23 01:00:30.390457 | orchestrator | 2025-11-23 01:00:30.390465 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-11-23 01:00:30.390474 | orchestrator | Sunday 23 November 2025 00:56:57 +0000 (0:00:08.765) 0:00:33.635 ******* 2025-11-23 01:00:30.390486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.390525 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.390542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.390561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390595 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.390664 | orchestrator | 2025-11-23 01:00:30.390678 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-23 01:00:30.390687 | orchestrator | Sunday 23 November 2025 00:56:59 +0000 (0:00:02.573) 0:00:36.208 ******* 2025-11-23 01:00:30.390696 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.390704 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.390713 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:00:30.390722 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:00:30.390730 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:00:30.390739 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:00:30.390748 | orchestrator | 2025-11-23 01:00:30.390756 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-23 01:00:30.390765 | orchestrator | Sunday 23 November 2025 00:57:00 +0000 (0:00:00.999) 0:00:37.208 ******* 2025-11-23 01:00:30.390773 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.390782 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.390790 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:00:30.390799 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 01:00:30.390808 | orchestrator | 2025-11-23 01:00:30.390816 | orchestrator | TASK [cinder 
: Ensuring cinder service ceph config subdirs exists] ************* 2025-11-23 01:00:30.390825 | orchestrator | Sunday 23 November 2025 00:57:01 +0000 (0:00:01.078) 0:00:38.287 ******* 2025-11-23 01:00:30.390833 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-11-23 01:00:30.390842 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-11-23 01:00:30.390851 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-11-23 01:00:30.390859 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-11-23 01:00:30.390868 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-11-23 01:00:30.390880 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-11-23 01:00:30.390889 | orchestrator | 2025-11-23 01:00:30.390898 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-11-23 01:00:30.390906 | orchestrator | Sunday 23 November 2025 00:57:03 +0000 (0:00:02.058) 0:00:40.345 ******* 2025-11-23 01:00:30.390916 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}])  2025-11-23 01:00:30.390940 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-23 01:00:30.390949 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-23 01:00:30.390974 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-23 01:00:30.390988 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-23 01:00:30.390997 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-23 01:00:30.391007 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-23 01:00:30.391023 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-23 01:00:30.391038 | orchestrator | changed: 
[testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-23 01:00:30.391052 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-23 01:00:30.391062 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-23 01:00:30.391079 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-23 01:00:30.391088 | orchestrator | 2025-11-23 01:00:30.391097 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-11-23 01:00:30.391106 | orchestrator | Sunday 23 November 2025 00:57:07 +0000 (0:00:03.809) 0:00:44.154 ******* 2025-11-23 01:00:30.391114 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-23 01:00:30.391124 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-23 01:00:30.391132 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 
2025-11-23 01:00:30.391141 | orchestrator | 2025-11-23 01:00:30.391149 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-11-23 01:00:30.391175 | orchestrator | Sunday 23 November 2025 00:57:09 +0000 (0:00:02.296) 0:00:46.451 ******* 2025-11-23 01:00:30.391184 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-11-23 01:00:30.391193 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-11-23 01:00:30.391201 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-11-23 01:00:30.391210 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-11-23 01:00:30.391218 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-11-23 01:00:30.391231 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-11-23 01:00:30.391240 | orchestrator | 2025-11-23 01:00:30.391249 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-11-23 01:00:30.391258 | orchestrator | Sunday 23 November 2025 00:57:13 +0000 (0:00:03.523) 0:00:49.975 ******* 2025-11-23 01:00:30.391266 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-11-23 01:00:30.391275 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-11-23 01:00:30.391283 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-11-23 01:00:30.391317 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-11-23 01:00:30.391327 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-11-23 01:00:30.391335 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-11-23 01:00:30.391355 | orchestrator | 2025-11-23 01:00:30.391364 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-11-23 01:00:30.391373 | orchestrator | Sunday 23 
November 2025 00:57:14 +0000 (0:00:01.199) 0:00:51.174 ******* 2025-11-23 01:00:30.391381 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.391390 | orchestrator | 2025-11-23 01:00:30.391398 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-11-23 01:00:30.391407 | orchestrator | Sunday 23 November 2025 00:57:14 +0000 (0:00:00.109) 0:00:51.284 ******* 2025-11-23 01:00:30.391415 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.391424 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.391432 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:00:30.391441 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:00:30.391456 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:00:30.391465 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:00:30.391474 | orchestrator | 2025-11-23 01:00:30.391482 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-23 01:00:30.391496 | orchestrator | Sunday 23 November 2025 00:57:15 +0000 (0:00:00.698) 0:00:51.983 ******* 2025-11-23 01:00:30.391506 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 01:00:30.391516 | orchestrator | 2025-11-23 01:00:30.391525 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-11-23 01:00:30.391533 | orchestrator | Sunday 23 November 2025 00:57:16 +0000 (0:00:01.202) 0:00:53.185 ******* 2025-11-23 01:00:30.391542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.391552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.391567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.391596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.391685 | orchestrator | 2025-11-23 01:00:30.391694 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-11-23 01:00:30.391703 | orchestrator | Sunday 23 November 2025 00:57:19 +0000 (0:00:03.258) 0:00:56.444 ******* 2025-11-23 01:00:30.391712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.391726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391735 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.391744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.391763 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391772 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.391781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.391790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391799 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:00:30.391808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391837 | orchestrator | skipping: [testbed-node-3] 2025-11-23 
01:00:30.391850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391868 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:00:30.391877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391895 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:00:30.391904 | orchestrator | 2025-11-23 01:00:30.391912 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-11-23 01:00:30.391929 | orchestrator | Sunday 23 November 2025 00:57:21 +0000 (0:00:01.787) 0:00:58.231 ******* 2025-11-23 01:00:30.391943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.391957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.391966 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.391976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.391985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.391994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392025 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:00:30.392034 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.392043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392066 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:00:30.392075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-11-23 01:00:30.392103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392122 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:00:30.392130 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:00:30.392139 | orchestrator | 2025-11-23 01:00:30.392147 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-11-23 01:00:30.392156 | orchestrator | Sunday 23 November 2025 00:57:23 +0000 (0:00:01.570) 0:00:59.802 ******* 2025-11-23 01:00:30.392169 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.392179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.392188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.392207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392248 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392309 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392329 | orchestrator | 2025-11-23 01:00:30.392338 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-11-23 01:00:30.392347 | orchestrator | Sunday 23 November 2025 00:57:26 +0000 (0:00:03.697) 0:01:03.499 ******* 2025-11-23 01:00:30.392356 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-23 01:00:30.392364 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:00:30.392373 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-23 01:00:30.392382 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:00:30.392391 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-23 01:00:30.392400 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:00:30.392409 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-23 01:00:30.392423 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-23 01:00:30.392431 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-23 01:00:30.392440 | orchestrator | 2025-11-23 01:00:30.392449 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-11-23 01:00:30.392457 | orchestrator | Sunday 23 November 2025 00:57:29 +0000 (0:00:02.483) 0:01:05.982 ******* 2025-11-23 01:00:30.392466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.392481 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.392502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.392525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392603 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.392612 | orchestrator | 2025-11-23 01:00:30.392621 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-11-23 01:00:30.392629 | orchestrator | Sunday 23 November 2025 00:57:40 +0000 (0:00:10.838) 0:01:16.821 ******* 2025-11-23 01:00:30.392643 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.392652 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.392661 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:00:30.392670 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:00:30.392678 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:00:30.392687 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:00:30.392695 | orchestrator | 2025-11-23 01:00:30.392704 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-11-23 01:00:30.392713 | orchestrator | Sunday 23 November 2025 00:57:42 +0000 (0:00:02.232) 0:01:19.054 ******* 2025-11-23 01:00:30.392738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.392749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392763 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.392772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.392782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392791 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.392805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-23 01:00:30.392815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392824 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:00:30.392967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.392997 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:00:30.393007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.393016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.393025 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:00:30.393034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.393049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-23 01:00:30.393064 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:00:30.393073 | orchestrator | 2025-11-23 01:00:30.393082 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-11-23 01:00:30.393091 | orchestrator | Sunday 23 November 2025 00:57:43 +0000 (0:00:01.162) 0:01:20.216 ******* 2025-11-23 01:00:30.393099 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.393108 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.393117 | orchestrator | skipping: [testbed-node-2] 
2025-11-23 01:00:30.393125 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:00:30.393134 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:00:30.393142 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:00:30.393151 | orchestrator | 2025-11-23 01:00:30.393160 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-11-23 01:00:30.393168 | orchestrator | Sunday 23 November 2025 00:57:44 +0000 (0:00:00.459) 0:01:20.676 ******* 2025-11-23 01:00:30.393264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.393286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.393333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-23 01:00:30.393366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.393394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.393404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.393414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.393423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.393437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.393457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.393466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 01:00:30.393475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-23 
01:00:30.393484 | orchestrator | 2025-11-23 01:00:30.393493 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-23 01:00:30.393502 | orchestrator | Sunday 23 November 2025 00:57:46 +0000 (0:00:02.134) 0:01:22.811 ******* 2025-11-23 01:00:30.393511 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.393519 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:00:30.393528 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:00:30.393536 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:00:30.393545 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:00:30.393554 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:00:30.393562 | orchestrator | 2025-11-23 01:00:30.393571 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-11-23 01:00:30.393579 | orchestrator | Sunday 23 November 2025 00:57:46 +0000 (0:00:00.474) 0:01:23.285 ******* 2025-11-23 01:00:30.393588 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:00:30.393598 | orchestrator | 2025-11-23 01:00:30.393608 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-11-23 01:00:30.393618 | orchestrator | Sunday 23 November 2025 00:57:49 +0000 (0:00:02.351) 0:01:25.637 ******* 2025-11-23 01:00:30.393627 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:00:30.393637 | orchestrator | 2025-11-23 01:00:30.393647 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-11-23 01:00:30.393657 | orchestrator | Sunday 23 November 2025 00:57:51 +0000 (0:00:02.266) 0:01:27.904 ******* 2025-11-23 01:00:30.393666 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:00:30.393675 | orchestrator | 2025-11-23 01:00:30.393684 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-23 01:00:30.393697 | orchestrator | Sunday 23 
November 2025 00:58:10 +0000 (0:00:19.532) 0:01:47.436 ******* 2025-11-23 01:00:30.393706 | orchestrator | 2025-11-23 01:00:30.393715 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-23 01:00:30.393723 | orchestrator | Sunday 23 November 2025 00:58:10 +0000 (0:00:00.142) 0:01:47.579 ******* 2025-11-23 01:00:30.393732 | orchestrator | 2025-11-23 01:00:30.393740 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-23 01:00:30.393749 | orchestrator | Sunday 23 November 2025 00:58:11 +0000 (0:00:00.311) 0:01:47.890 ******* 2025-11-23 01:00:30.393757 | orchestrator | 2025-11-23 01:00:30.393766 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-23 01:00:30.393774 | orchestrator | Sunday 23 November 2025 00:58:11 +0000 (0:00:00.065) 0:01:47.955 ******* 2025-11-23 01:00:30.393783 | orchestrator | 2025-11-23 01:00:30.393791 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-23 01:00:30.393800 | orchestrator | Sunday 23 November 2025 00:58:11 +0000 (0:00:00.111) 0:01:48.067 ******* 2025-11-23 01:00:30.393809 | orchestrator | 2025-11-23 01:00:30.393817 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-23 01:00:30.393826 | orchestrator | Sunday 23 November 2025 00:58:11 +0000 (0:00:00.139) 0:01:48.207 ******* 2025-11-23 01:00:30.393834 | orchestrator | 2025-11-23 01:00:30.393843 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-11-23 01:00:30.393856 | orchestrator | Sunday 23 November 2025 00:58:11 +0000 (0:00:00.143) 0:01:48.350 ******* 2025-11-23 01:00:30.393865 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:00:30.393874 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:00:30.393884 | orchestrator | changed: [testbed-node-1] 2025-11-23 
01:00:30.393893 | orchestrator | 2025-11-23 01:00:30.393903 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-11-23 01:00:30.393917 | orchestrator | Sunday 23 November 2025 00:58:45 +0000 (0:00:34.207) 0:02:22.557 ******* 2025-11-23 01:00:30.393927 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:00:30.393936 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:00:30.393946 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:00:30.393955 | orchestrator | 2025-11-23 01:00:30.393965 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-11-23 01:00:30.393974 | orchestrator | Sunday 23 November 2025 00:58:56 +0000 (0:00:10.304) 0:02:32.861 ******* 2025-11-23 01:00:30.393984 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:00:30.393993 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:00:30.394003 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:00:30.394012 | orchestrator | 2025-11-23 01:00:30.394077 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-11-23 01:00:30.394088 | orchestrator | Sunday 23 November 2025 01:00:16 +0000 (0:01:20.340) 0:03:53.202 ******* 2025-11-23 01:00:30.394098 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:00:30.394107 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:00:30.394117 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:00:30.394127 | orchestrator | 2025-11-23 01:00:30.394136 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-11-23 01:00:30.394146 | orchestrator | Sunday 23 November 2025 01:00:28 +0000 (0:00:11.708) 0:04:04.910 ******* 2025-11-23 01:00:30.394156 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:00:30.394165 | orchestrator | 2025-11-23 01:00:30.394175 | orchestrator | PLAY RECAP 
********************************************************************* 2025-11-23 01:00:30.394185 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-23 01:00:30.394195 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-23 01:00:30.394204 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-23 01:00:30.394221 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-23 01:00:30.394230 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-23 01:00:30.394240 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-23 01:00:30.394250 | orchestrator | 2025-11-23 01:00:30.394260 | orchestrator | 2025-11-23 01:00:30.394269 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 01:00:30.394279 | orchestrator | Sunday 23 November 2025 01:00:29 +0000 (0:00:01.373) 0:04:06.284 ******* 2025-11-23 01:00:30.394308 | orchestrator | =============================================================================== 2025-11-23 01:00:30.394320 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 80.34s 2025-11-23 01:00:30.394330 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 34.21s 2025-11-23 01:00:30.394339 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.53s 2025-11-23 01:00:30.394349 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.71s 2025-11-23 01:00:30.394358 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.84s 2025-11-23 01:00:30.394368 | orchestrator | cinder : 
Restart cinder-scheduler container ---------------------------- 10.30s 2025-11-23 01:00:30.394377 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.77s 2025-11-23 01:00:30.394387 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.90s 2025-11-23 01:00:30.394396 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.96s 2025-11-23 01:00:30.394405 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.87s 2025-11-23 01:00:30.394415 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.81s 2025-11-23 01:00:30.394424 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.70s 2025-11-23 01:00:30.394434 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.60s 2025-11-23 01:00:30.394443 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.52s 2025-11-23 01:00:30.394453 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.49s 2025-11-23 01:00:30.394462 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.26s 2025-11-23 01:00:30.394471 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.57s 2025-11-23 01:00:30.394481 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.48s 2025-11-23 01:00:30.394490 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.35s 2025-11-23 01:00:30.394499 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.30s 2025-11-23 01:00:30.394515 | orchestrator | 2025-11-23 01:00:30 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:30.394525 | orchestrator | 2025-11-23 
01:00:30 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:30.394540 | orchestrator | 2025-11-23 01:00:30 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:30.394550 | orchestrator | 2025-11-23 01:00:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:33.421457 | orchestrator | 2025-11-23 01:00:33 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:33.423515 | orchestrator | 2025-11-23 01:00:33 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:33.424147 | orchestrator | 2025-11-23 01:00:33 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:33.425683 | orchestrator | 2025-11-23 01:00:33 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:33.425765 | orchestrator | 2025-11-23 01:00:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:36.456111 | orchestrator | 2025-11-23 01:00:36 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:36.456223 | orchestrator | 2025-11-23 01:00:36 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:36.461774 | orchestrator | 2025-11-23 01:00:36 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:36.463283 | orchestrator | 2025-11-23 01:00:36 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:36.463364 | orchestrator | 2025-11-23 01:00:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:39.504783 | orchestrator | 2025-11-23 01:00:39 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:39.506573 | orchestrator | 2025-11-23 01:00:39 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:39.508352 | orchestrator | 2025-11-23 01:00:39 | INFO  | Task 
48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:39.508860 | orchestrator | 2025-11-23 01:00:39 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:39.509082 | orchestrator | 2025-11-23 01:00:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:42.540282 | orchestrator | 2025-11-23 01:00:42 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:42.540746 | orchestrator | 2025-11-23 01:00:42 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:42.541229 | orchestrator | 2025-11-23 01:00:42 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:42.542096 | orchestrator | 2025-11-23 01:00:42 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:42.542122 | orchestrator | 2025-11-23 01:00:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:45.571960 | orchestrator | 2025-11-23 01:00:45 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:45.574256 | orchestrator | 2025-11-23 01:00:45 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:45.577693 | orchestrator | 2025-11-23 01:00:45 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:45.578633 | orchestrator | 2025-11-23 01:00:45 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:45.578681 | orchestrator | 2025-11-23 01:00:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:48.609594 | orchestrator | 2025-11-23 01:00:48 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:48.609818 | orchestrator | 2025-11-23 01:00:48 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:48.611170 | orchestrator | 2025-11-23 01:00:48 | INFO  | Task 
48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:48.611826 | orchestrator | 2025-11-23 01:00:48 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:48.611864 | orchestrator | 2025-11-23 01:00:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:51.667844 | orchestrator | 2025-11-23 01:00:51 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:51.668028 | orchestrator | 2025-11-23 01:00:51 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:51.668951 | orchestrator | 2025-11-23 01:00:51 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:51.669585 | orchestrator | 2025-11-23 01:00:51 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:51.669635 | orchestrator | 2025-11-23 01:00:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:54.695262 | orchestrator | 2025-11-23 01:00:54 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:54.697700 | orchestrator | 2025-11-23 01:00:54 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:54.698474 | orchestrator | 2025-11-23 01:00:54 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:54.699642 | orchestrator | 2025-11-23 01:00:54 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:54.699929 | orchestrator | 2025-11-23 01:00:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:00:57.735711 | orchestrator | 2025-11-23 01:00:57 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:00:57.737467 | orchestrator | 2025-11-23 01:00:57 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:00:57.739068 | orchestrator | 2025-11-23 01:00:57 | INFO  | Task 
48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:00:57.740430 | orchestrator | 2025-11-23 01:00:57 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:00:57.740542 | orchestrator | 2025-11-23 01:00:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:01:00.783666 | orchestrator | 2025-11-23 01:01:00 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:01:00.783774 | orchestrator | 2025-11-23 01:01:00 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:01:00.784494 | orchestrator | 2025-11-23 01:01:00 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state STARTED 2025-11-23 01:01:00.785230 | orchestrator | 2025-11-23 01:01:00 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:01:00.785704 | orchestrator | 2025-11-23 01:01:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:01:03.831465 | orchestrator | 2025-11-23 01:01:03 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:01:03.832417 | orchestrator | 2025-11-23 01:01:03 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:01:03.832472 | orchestrator | 2025-11-23 01:01:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:01:03.833568 | orchestrator | 2025-11-23 01:01:03 | INFO  | Task 48a9be92-ad74-4f2c-9631-511f755528fa is in state SUCCESS 2025-11-23 01:01:03.834735 | orchestrator | 2025-11-23 01:01:03.834781 | orchestrator | 2025-11-23 01:01:03.834841 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 01:01:03.834857 | orchestrator | 2025-11-23 01:01:03.834868 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 01:01:03.834880 | orchestrator | Sunday 23 November 2025 00:58:59 +0000 (0:00:00.225) 0:00:00.225 
******* 2025-11-23 01:01:03.834891 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:01:03.834903 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:01:03.834914 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:01:03.834925 | orchestrator | 2025-11-23 01:01:03.834965 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 01:01:03.834976 | orchestrator | Sunday 23 November 2025 00:59:00 +0000 (0:00:00.275) 0:00:00.500 ******* 2025-11-23 01:01:03.834987 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-11-23 01:01:03.834999 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-11-23 01:01:03.835009 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-11-23 01:01:03.835020 | orchestrator | 2025-11-23 01:01:03.835031 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-11-23 01:01:03.835042 | orchestrator | 2025-11-23 01:01:03.835053 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-23 01:01:03.835064 | orchestrator | Sunday 23 November 2025 00:59:00 +0000 (0:00:00.389) 0:00:00.890 ******* 2025-11-23 01:01:03.835075 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:01:03.835086 | orchestrator | 2025-11-23 01:01:03.835097 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-11-23 01:01:03.835108 | orchestrator | Sunday 23 November 2025 00:59:01 +0000 (0:00:00.558) 0:00:01.448 ******* 2025-11-23 01:01:03.835119 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-11-23 01:01:03.835130 | orchestrator | 2025-11-23 01:01:03.835141 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-11-23 01:01:03.835151 | orchestrator | Sunday 23 November 
2025 00:59:04 +0000 (0:00:03.626) 0:00:05.075 ******* 2025-11-23 01:01:03.835162 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-11-23 01:01:03.835173 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-11-23 01:01:03.835184 | orchestrator | 2025-11-23 01:01:03.835194 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-11-23 01:01:03.835205 | orchestrator | Sunday 23 November 2025 00:59:11 +0000 (0:00:06.745) 0:00:11.821 ******* 2025-11-23 01:01:03.835230 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-23 01:01:03.835241 | orchestrator | 2025-11-23 01:01:03.835252 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-11-23 01:01:03.835263 | orchestrator | Sunday 23 November 2025 00:59:14 +0000 (0:00:03.214) 0:00:15.035 ******* 2025-11-23 01:01:03.835273 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-23 01:01:03.835284 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-11-23 01:01:03.835323 | orchestrator | 2025-11-23 01:01:03.835337 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-11-23 01:01:03.835350 | orchestrator | Sunday 23 November 2025 00:59:18 +0000 (0:00:04.136) 0:00:19.172 ******* 2025-11-23 01:01:03.835363 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-23 01:01:03.835377 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-11-23 01:01:03.835389 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-11-23 01:01:03.835402 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-11-23 01:01:03.835415 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-11-23 01:01:03.835428 | orchestrator | 2025-11-23 01:01:03.835441 | 
orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-11-23 01:01:03.835453 | orchestrator | Sunday 23 November 2025 00:59:34 +0000 (0:00:15.517) 0:00:34.689 ******* 2025-11-23 01:01:03.835465 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-11-23 01:01:03.835478 | orchestrator | 2025-11-23 01:01:03.835491 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-11-23 01:01:03.835503 | orchestrator | Sunday 23 November 2025 00:59:38 +0000 (0:00:03.747) 0:00:38.437 ******* 2025-11-23 01:01:03.835520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.835562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.835578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.835602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.835624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.835646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.835667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.835682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.835695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.835708 | orchestrator | 2025-11-23 01:01:03.835719 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-11-23 01:01:03.835730 | orchestrator | Sunday 23 November 2025 00:59:40 +0000 (0:00:02.143) 0:00:40.580 ******* 2025-11-23 01:01:03.835748 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-11-23 01:01:03.835767 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-11-23 01:01:03.835783 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 
2025-11-23 01:01:03.835801 | orchestrator | 2025-11-23 01:01:03.835820 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-11-23 01:01:03.835847 | orchestrator | Sunday 23 November 2025 00:59:41 +0000 (0:00:01.309) 0:00:41.890 ******* 2025-11-23 01:01:03.835859 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:01:03.835870 | orchestrator | 2025-11-23 01:01:03.835880 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-11-23 01:01:03.835891 | orchestrator | Sunday 23 November 2025 00:59:41 +0000 (0:00:00.102) 0:00:41.993 ******* 2025-11-23 01:01:03.835902 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:01:03.835912 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:01:03.835923 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:01:03.835934 | orchestrator | 2025-11-23 01:01:03.835944 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-23 01:01:03.835955 | orchestrator | Sunday 23 November 2025 00:59:42 +0000 (0:00:00.399) 0:00:42.393 ******* 2025-11-23 01:01:03.835966 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:01:03.835992 | orchestrator | 2025-11-23 01:01:03.836003 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-11-23 01:01:03.836015 | orchestrator | Sunday 23 November 2025 00:59:43 +0000 (0:00:00.989) 0:00:43.382 ******* 2025-11-23 01:01:03.836035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.836057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.836069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.836099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.836158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.836182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.836194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.836214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.836225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.836237 | orchestrator | 2025-11-23 01:01:03.836248 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-11-23 01:01:03.836259 | orchestrator | Sunday 23 November 2025 00:59:46 +0000 (0:00:03.632) 0:00:47.015 ******* 2025-11-23 01:01:03.836276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-23 01:01:03.836326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836351 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:01:03.836370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2025-11-23 01:01:03.836381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836404 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:01:03.836421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-23 01:01:03.836439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836462 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:01:03.836473 | orchestrator | 2025-11-23 01:01:03.836484 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-11-23 01:01:03.836495 | orchestrator | Sunday 23 November 2025 00:59:48 +0000 
(0:00:01.800) 0:00:48.815 ******* 2025-11-23 01:01:03.836514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-23 01:01:03.836527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836561 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:01:03.836573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-23 01:01:03.836584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836607 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:01:03.836625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-23 01:01:03.836637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.836671 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:01:03.836682 | orchestrator | 2025-11-23 01:01:03.836693 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-11-23 01:01:03.836704 | orchestrator | Sunday 23 November 2025 00:59:49 +0000 (0:00:00.699) 0:00:49.515 ******* 2025-11-23 01:01:03.836716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.836995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.837013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.837034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837114 | orchestrator | 2025-11-23 01:01:03.837125 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-11-23 01:01:03.837143 | orchestrator | Sunday 23 November 2025 00:59:52 +0000 (0:00:03.322) 0:00:52.838 ******* 2025-11-23 01:01:03.837154 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:01:03.837165 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:01:03.837176 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:01:03.837187 | orchestrator | 2025-11-23 01:01:03.837198 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-11-23 01:01:03.837208 | orchestrator | Sunday 23 November 2025 00:59:54 +0000 (0:00:02.298) 0:00:55.136 ******* 2025-11-23 01:01:03.837219 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 01:01:03.837230 | orchestrator | 2025-11-23 01:01:03.837240 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-11-23 01:01:03.837251 | orchestrator | Sunday 23 November 2025 00:59:56 +0000 (0:00:01.869) 0:00:57.006 ******* 2025-11-23 01:01:03.837261 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:01:03.837272 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:01:03.837283 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:01:03.837352 | orchestrator | 2025-11-23 01:01:03.837365 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-11-23 01:01:03.837376 | orchestrator | Sunday 23 November 2025 00:59:57 +0000 (0:00:00.708) 0:00:57.714 ******* 2025-11-23 01:01:03.837392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.837405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.837424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.837447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837521 | orchestrator | 2025-11-23 01:01:03.837532 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-11-23 01:01:03.837550 | orchestrator | Sunday 23 November 2025 01:00:05 +0000 (0:00:08.466) 0:01:06.180 ******* 2025-11-23 01:01:03.837568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-23 01:01:03.837580 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.837596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.837610 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:01:03.837624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-23 01:01:03.837637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.837664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.837678 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:01:03.837691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-23 01:01:03.837710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.837724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:01:03.837736 | orchestrator | skipping: 
[testbed-node-2] 2025-11-23 01:01:03.837749 | orchestrator | 2025-11-23 01:01:03.837762 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-11-23 01:01:03.837775 | orchestrator | Sunday 23 November 2025 01:00:06 +0000 (0:00:01.078) 0:01:07.259 ******* 2025-11-23 01:01:03.837789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.837816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.837830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-23 01:01:03.837848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:01:03.837945 | orchestrator | 2025-11-23 01:01:03.837958 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-23 01:01:03.837970 | orchestrator | Sunday 23 November 2025 01:00:10 +0000 (0:00:03.323) 0:01:10.583 ******* 2025-11-23 01:01:03.837981 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:01:03.837993 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:01:03.838003 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:01:03.838070 | orchestrator | 2025-11-23 01:01:03.838085 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-11-23 01:01:03.838096 | orchestrator | Sunday 23 November 2025 01:00:10 +0000 (0:00:00.375) 0:01:10.961 ******* 2025-11-23 01:01:03.838107 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:01:03.838118 | orchestrator | 
2025-11-23 01:01:03.838128 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-11-23 01:01:03.838139 | orchestrator | Sunday 23 November 2025 01:00:13 +0000 (0:00:02.621) 0:01:13.583 ******* 2025-11-23 01:01:03.838150 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:01:03.838161 | orchestrator | 2025-11-23 01:01:03.838171 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-11-23 01:01:03.838182 | orchestrator | Sunday 23 November 2025 01:00:15 +0000 (0:00:02.562) 0:01:16.146 ******* 2025-11-23 01:01:03.838198 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:01:03.838209 | orchestrator | 2025-11-23 01:01:03.838220 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-23 01:01:03.838231 | orchestrator | Sunday 23 November 2025 01:00:28 +0000 (0:00:12.621) 0:01:28.768 ******* 2025-11-23 01:01:03.838242 | orchestrator | 2025-11-23 01:01:03.838252 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-23 01:01:03.838263 | orchestrator | Sunday 23 November 2025 01:00:28 +0000 (0:00:00.196) 0:01:28.964 ******* 2025-11-23 01:01:03.838274 | orchestrator | 2025-11-23 01:01:03.838285 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-23 01:01:03.838317 | orchestrator | Sunday 23 November 2025 01:00:28 +0000 (0:00:00.204) 0:01:29.169 ******* 2025-11-23 01:01:03.838329 | orchestrator | 2025-11-23 01:01:03.838340 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-11-23 01:01:03.838359 | orchestrator | Sunday 23 November 2025 01:00:28 +0000 (0:00:00.125) 0:01:29.295 ******* 2025-11-23 01:01:03.838370 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:01:03.838381 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:01:03.838392 | orchestrator | 
changed: [testbed-node-1] 2025-11-23 01:01:03.838403 | orchestrator | 2025-11-23 01:01:03.838414 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-11-23 01:01:03.838425 | orchestrator | Sunday 23 November 2025 01:00:41 +0000 (0:00:12.610) 0:01:41.906 ******* 2025-11-23 01:01:03.838436 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:01:03.838447 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:01:03.838458 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:01:03.838469 | orchestrator | 2025-11-23 01:01:03.838480 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-11-23 01:01:03.838491 | orchestrator | Sunday 23 November 2025 01:00:53 +0000 (0:00:11.562) 0:01:53.468 ******* 2025-11-23 01:01:03.838502 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:01:03.838513 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:01:03.838523 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:01:03.838534 | orchestrator | 2025-11-23 01:01:03.838545 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 01:01:03.838558 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-23 01:01:03.838571 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-23 01:01:03.838582 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-23 01:01:03.838593 | orchestrator | 2025-11-23 01:01:03.838604 | orchestrator | 2025-11-23 01:01:03.838615 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 01:01:03.838626 | orchestrator | Sunday 23 November 2025 01:01:01 +0000 (0:00:07.979) 0:02:01.448 ******* 2025-11-23 01:01:03.838637 | orchestrator | 
=============================================================================== 2025-11-23 01:01:03.838648 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.52s 2025-11-23 01:01:03.838665 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.62s 2025-11-23 01:01:03.838677 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.61s 2025-11-23 01:01:03.838687 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.56s 2025-11-23 01:01:03.838699 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.47s 2025-11-23 01:01:03.838710 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.98s 2025-11-23 01:01:03.838720 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.75s 2025-11-23 01:01:03.838731 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.14s 2025-11-23 01:01:03.838742 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.75s 2025-11-23 01:01:03.838752 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.63s 2025-11-23 01:01:03.838763 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.63s 2025-11-23 01:01:03.838774 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.33s 2025-11-23 01:01:03.838785 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.32s 2025-11-23 01:01:03.838795 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.21s 2025-11-23 01:01:03.838806 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.62s 2025-11-23 01:01:03.838817 | orchestrator | barbican : 
Creating barbican database user and setting permissions ------ 2.56s 2025-11-23 01:01:03.838827 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.30s 2025-11-23 01:01:03.838845 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.14s 2025-11-23 01:01:03.838856 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.87s 2025-11-23 01:01:03.838867 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.80s 2025-11-23 01:01:03.838878 | orchestrator | 2025-11-23 01:01:03 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:01:03.838890 | orchestrator | 2025-11-23 01:01:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:01:06.865836 | orchestrator | 2025-11-23 01:01:06 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:01:06.866132 | orchestrator | 2025-11-23 01:01:06 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:01:06.866935 | orchestrator | 2025-11-23 01:01:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:01:06.867640 | orchestrator | 2025-11-23 01:01:06 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:01:06.867792 | orchestrator | 2025-11-23 01:01:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:01:09.897081 | orchestrator | 2025-11-23 01:01:09 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:01:09.897500 | orchestrator | 2025-11-23 01:01:09 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:01:09.898753 | orchestrator | 2025-11-23 01:01:09 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:01:09.899516 | orchestrator | 2025-11-23 01:01:09 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff 
is in state STARTED 2025-11-23 01:01:09.899571 | orchestrator | 2025-11-23 01:01:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:01:12.933157 | orchestrator | 2025-11-23 01:01:12 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:01:12.933253 | orchestrator | 2025-11-23 01:01:12 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:01:12.933613 | orchestrator | 2025-11-23 01:01:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:01:12.935096 | orchestrator | 2025-11-23 01:01:12 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:01:12.935191 | orchestrator | 2025-11-23 01:01:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:01:15.964537 | orchestrator | 2025-11-23 01:01:15 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:01:15.965902 | orchestrator | 2025-11-23 01:01:15 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:01:15.967873 | orchestrator | 2025-11-23 01:01:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:01:15.968637 | orchestrator | 2025-11-23 01:01:15 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 01:01:15.968783 | orchestrator | 2025-11-23 01:01:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:01:19.011923 | orchestrator | 2025-11-23 01:01:19 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:01:19.012615 | orchestrator | 2025-11-23 01:01:19 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:01:19.014252 | orchestrator | 2025-11-23 01:01:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:01:19.016052 | orchestrator | 2025-11-23 01:01:19 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED 2025-11-23 
01:01:19.016358 | orchestrator | 2025-11-23 01:01:19 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:22.055656 | orchestrator | 2025-11-23 01:01:22 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:22.056070 | orchestrator | 2025-11-23 01:01:22 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:22.057000 | orchestrator | 2025-11-23 01:01:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:22.057927 | orchestrator | 2025-11-23 01:01:22 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:22.058009 | orchestrator | 2025-11-23 01:01:22 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:25.090231 | orchestrator | 2025-11-23 01:01:25 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:25.090981 | orchestrator | 2025-11-23 01:01:25 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:25.091016 | orchestrator | 2025-11-23 01:01:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:25.091668 | orchestrator | 2025-11-23 01:01:25 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:25.091691 | orchestrator | 2025-11-23 01:01:25 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:28.124109 | orchestrator | 2025-11-23 01:01:28 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:28.125410 | orchestrator | 2025-11-23 01:01:28 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:28.125923 | orchestrator | 2025-11-23 01:01:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:28.127237 | orchestrator | 2025-11-23 01:01:28 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:28.127283 | orchestrator | 2025-11-23 01:01:28 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:31.151418 | orchestrator | 2025-11-23 01:01:31 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:31.151578 | orchestrator | 2025-11-23 01:01:31 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:31.152418 | orchestrator | 2025-11-23 01:01:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:31.154136 | orchestrator | 2025-11-23 01:01:31 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:31.154189 | orchestrator | 2025-11-23 01:01:31 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:34.191582 | orchestrator | 2025-11-23 01:01:34 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:34.193017 | orchestrator | 2025-11-23 01:01:34 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:34.194492 | orchestrator | 2025-11-23 01:01:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:34.197378 | orchestrator | 2025-11-23 01:01:34 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:34.197411 | orchestrator | 2025-11-23 01:01:34 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:37.231151 | orchestrator | 2025-11-23 01:01:37 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:37.231707 | orchestrator | 2025-11-23 01:01:37 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:37.232020 | orchestrator | 2025-11-23 01:01:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:37.234538 | orchestrator | 2025-11-23 01:01:37 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:37.234621 | orchestrator | 2025-11-23 01:01:37 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:40.259752 | orchestrator | 2025-11-23 01:01:40 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:40.260085 | orchestrator | 2025-11-23 01:01:40 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:40.260702 | orchestrator | 2025-11-23 01:01:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:40.261309 | orchestrator | 2025-11-23 01:01:40 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:40.261335 | orchestrator | 2025-11-23 01:01:40 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:43.301483 | orchestrator | 2025-11-23 01:01:43 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:43.303417 | orchestrator | 2025-11-23 01:01:43 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:43.304052 | orchestrator | 2025-11-23 01:01:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:43.304874 | orchestrator | 2025-11-23 01:01:43 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:43.304893 | orchestrator | 2025-11-23 01:01:43 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:46.341659 | orchestrator | 2025-11-23 01:01:46 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:46.341768 | orchestrator | 2025-11-23 01:01:46 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:46.342556 | orchestrator | 2025-11-23 01:01:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:46.343315 | orchestrator | 2025-11-23 01:01:46 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:46.343371 | orchestrator | 2025-11-23 01:01:46 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:49.380957 | orchestrator | 2025-11-23 01:01:49 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:49.381560 | orchestrator | 2025-11-23 01:01:49 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:49.382128 | orchestrator | 2025-11-23 01:01:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:49.383051 | orchestrator | 2025-11-23 01:01:49 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:49.383130 | orchestrator | 2025-11-23 01:01:49 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:52.420448 | orchestrator | 2025-11-23 01:01:52 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:52.422504 | orchestrator | 2025-11-23 01:01:52 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:52.423343 | orchestrator | 2025-11-23 01:01:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:52.424319 | orchestrator | 2025-11-23 01:01:52 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:52.425356 | orchestrator | 2025-11-23 01:01:52 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:55.467016 | orchestrator | 2025-11-23 01:01:55 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:55.467661 | orchestrator | 2025-11-23 01:01:55 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:55.469191 | orchestrator | 2025-11-23 01:01:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:55.469966 | orchestrator | 2025-11-23 01:01:55 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:55.469998 | orchestrator | 2025-11-23 01:01:55 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:01:58.505498 | orchestrator | 2025-11-23 01:01:58 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:01:58.508154 | orchestrator | 2025-11-23 01:01:58 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:01:58.512211 | orchestrator | 2025-11-23 01:01:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:01:58.514919 | orchestrator | 2025-11-23 01:01:58 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:01:58.515016 | orchestrator | 2025-11-23 01:01:58 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:01.544275 | orchestrator | 2025-11-23 01:02:01 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:01.544422 | orchestrator | 2025-11-23 01:02:01 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:01.544725 | orchestrator | 2025-11-23 01:02:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:01.546347 | orchestrator | 2025-11-23 01:02:01 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:01.546373 | orchestrator | 2025-11-23 01:02:01 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:04.592453 | orchestrator | 2025-11-23 01:02:04 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:04.593868 | orchestrator | 2025-11-23 01:02:04 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:04.595848 | orchestrator | 2025-11-23 01:02:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:04.597417 | orchestrator | 2025-11-23 01:02:04 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:04.597466 | orchestrator | 2025-11-23 01:02:04 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:07.634552 | orchestrator | 2025-11-23 01:02:07 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:07.635017 | orchestrator | 2025-11-23 01:02:07 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:07.638395 | orchestrator | 2025-11-23 01:02:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:07.638848 | orchestrator | 2025-11-23 01:02:07 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:07.638880 | orchestrator | 2025-11-23 01:02:07 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:10.661886 | orchestrator | 2025-11-23 01:02:10 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:10.662203 | orchestrator | 2025-11-23 01:02:10 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:10.662991 | orchestrator | 2025-11-23 01:02:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:10.663759 | orchestrator | 2025-11-23 01:02:10 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:10.663805 | orchestrator | 2025-11-23 01:02:10 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:13.686719 | orchestrator | 2025-11-23 01:02:13 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:13.686940 | orchestrator | 2025-11-23 01:02:13 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:13.687593 | orchestrator | 2025-11-23 01:02:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:13.688042 | orchestrator | 2025-11-23 01:02:13 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:13.688075 | orchestrator | 2025-11-23 01:02:13 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:16.726342 | orchestrator | 2025-11-23 01:02:16 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:16.728761 | orchestrator | 2025-11-23 01:02:16 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:16.731994 | orchestrator | 2025-11-23 01:02:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:16.734221 | orchestrator | 2025-11-23 01:02:16 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:16.734920 | orchestrator | 2025-11-23 01:02:16 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:19.764072 | orchestrator | 2025-11-23 01:02:19 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:19.765157 | orchestrator | 2025-11-23 01:02:19 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:19.766568 | orchestrator | 2025-11-23 01:02:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:19.767759 | orchestrator | 2025-11-23 01:02:19 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:19.768156 | orchestrator | 2025-11-23 01:02:19 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:22.812540 | orchestrator | 2025-11-23 01:02:22 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:22.814271 | orchestrator | 2025-11-23 01:02:22 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:22.816207 | orchestrator | 2025-11-23 01:02:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:22.817736 | orchestrator | 2025-11-23 01:02:22 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:22.817943 | orchestrator | 2025-11-23 01:02:22 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:25.857866 | orchestrator | 2025-11-23 01:02:25 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:25.860052 | orchestrator | 2025-11-23 01:02:25 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:25.861946 | orchestrator | 2025-11-23 01:02:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:25.863858 | orchestrator | 2025-11-23 01:02:25 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:25.863977 | orchestrator | 2025-11-23 01:02:25 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:28.905892 | orchestrator | 2025-11-23 01:02:28 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:28.906753 | orchestrator | 2025-11-23 01:02:28 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:28.907737 | orchestrator | 2025-11-23 01:02:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:28.908192 | orchestrator | 2025-11-23 01:02:28 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:28.908435 | orchestrator | 2025-11-23 01:02:28 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:31.940993 | orchestrator | 2025-11-23 01:02:31 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:31.941903 | orchestrator | 2025-11-23 01:02:31 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:31.942895 | orchestrator | 2025-11-23 01:02:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:31.943587 | orchestrator | 2025-11-23 01:02:31 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state STARTED
2025-11-23 01:02:31.943665 | orchestrator | 2025-11-23 01:02:31 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:02:34.975577 | orchestrator | 2025-11-23 01:02:34 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:02:34.977243 | orchestrator | 2025-11-23 01:02:34 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED
2025-11-23 01:02:34.979112 | orchestrator | 2025-11-23 01:02:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:02:34.982070 | orchestrator | 2025-11-23 01:02:34 | INFO  | Task 00c3be22-eab9-462e-ba28-701e898e72ff is in state SUCCESS
2025-11-23 01:02:34.983861 | orchestrator |
2025-11-23 01:02:34.983895 | orchestrator |
2025-11-23 01:02:34.983902 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 01:02:34.983911 | orchestrator |
2025-11-23 01:02:34.983917 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 01:02:34.983924 | orchestrator | Sunday 23 November 2025 00:58:50 +0000 (0:00:00.225) 0:00:00.225 *******
2025-11-23 01:02:34.983931 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:02:34.983939 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:02:34.983946 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:02:34.983953 | orchestrator | ok: [testbed-node-3]
2025-11-23 01:02:34.983959 | orchestrator | ok: [testbed-node-4]
2025-11-23 01:02:34.983965 | orchestrator | ok: [testbed-node-5]
2025-11-23 01:02:34.983972 | orchestrator |
2025-11-23 01:02:34.983978 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 01:02:34.983984 | orchestrator | Sunday 23 November 2025 00:58:50 +0000 (0:00:00.619) 0:00:00.845 *******
2025-11-23 01:02:34.983991 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-11-23 01:02:34.983998 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-11-23 01:02:34.984005 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-11-23 01:02:34.984010 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-11-23 01:02:34.984017 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-11-23 01:02:34.984024 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-11-23 01:02:34.984031 | orchestrator |
2025-11-23 01:02:34.984037 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-11-23 01:02:34.984043 | orchestrator |
2025-11-23 01:02:34.984050 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-11-23 01:02:34.984056 | orchestrator | Sunday 23 November 2025 00:58:51 +0000 (0:00:00.534) 0:00:01.380 *******
2025-11-23 01:02:34.984063 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 01:02:34.984072 | orchestrator |
2025-11-23 01:02:34.984078 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-11-23 01:02:34.984084 | orchestrator | Sunday 23 November 2025 00:58:52 +0000 (0:00:01.001) 0:00:02.381 *******
2025-11-23 01:02:34.984113 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:02:34.984121 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:02:34.984127 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:02:34.984133 | orchestrator | ok: [testbed-node-3]
2025-11-23 01:02:34.984140 | orchestrator | ok: [testbed-node-4]
2025-11-23 01:02:34.984146 | orchestrator | ok: [testbed-node-5]
2025-11-23 01:02:34.984152 | orchestrator |
2025-11-23 01:02:34.984158 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-11-23 01:02:34.984165 | orchestrator | Sunday 23 November 2025 00:58:53 +0000 (0:00:01.208) 0:00:03.590 *******
2025-11-23 01:02:34.984171 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:02:34.984177 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:02:34.984183 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:02:34.984189 | orchestrator | ok: [testbed-node-3]
2025-11-23 01:02:34.984196 | orchestrator | ok: [testbed-node-4]
2025-11-23 01:02:34.984251 | orchestrator | ok: [testbed-node-5]
2025-11-23 01:02:34.984258 | orchestrator |
2025-11-23 01:02:34.984265 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-11-23 01:02:34.984272 | orchestrator | Sunday 23 November 2025 00:58:54 +0000 (0:00:00.996) 0:00:04.587 *******
2025-11-23 01:02:34.984279 | orchestrator | ok: [testbed-node-0] => {
2025-11-23 01:02:34.984286 | orchestrator |     "changed": false,
2025-11-23 01:02:34.984304 | orchestrator |     "msg": "All assertions passed"
2025-11-23 01:02:34.984311 | orchestrator | }
2025-11-23 01:02:34.984318 | orchestrator | ok: [testbed-node-1] => {
2025-11-23 01:02:34.984324 | orchestrator |     "changed": false,
2025-11-23 01:02:34.984349 | orchestrator |     "msg": "All assertions passed"
2025-11-23 01:02:34.984356 | orchestrator | }
2025-11-23 01:02:34.984363 | orchestrator | ok: [testbed-node-2] => {
2025-11-23 01:02:34.984369 | orchestrator |     "changed": false,
2025-11-23 01:02:34.984376 | orchestrator |     "msg": "All assertions passed"
2025-11-23 01:02:34.984383 | orchestrator | }
2025-11-23 01:02:34.984389 | orchestrator | ok: [testbed-node-3] => {
2025-11-23 01:02:34.984395 | orchestrator |     "changed": false,
2025-11-23 01:02:34.984401 | orchestrator |     "msg": "All assertions passed"
2025-11-23 01:02:34.984407 | orchestrator | }
2025-11-23 01:02:34.984414 | orchestrator | ok: [testbed-node-4] => {
2025-11-23 01:02:34.984420 | orchestrator |     "changed": false,
2025-11-23 01:02:34.984427 | orchestrator |     "msg": "All assertions passed"
2025-11-23 01:02:34.984435 | orchestrator | }
2025-11-23 01:02:34.984442 | orchestrator | ok: [testbed-node-5] => {
2025-11-23 01:02:34.984449 | orchestrator |     "changed": false,
2025-11-23 01:02:34.984457 | orchestrator |     "msg": "All assertions passed"
2025-11-23 01:02:34.984464 | orchestrator | }
2025-11-23 01:02:34.984471 | orchestrator |
2025-11-23 01:02:34.984478 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-11-23 01:02:34.984484 | orchestrator | Sunday 23 November 2025 00:58:55 +0000 (0:00:00.685) 0:00:05.273 *******
2025-11-23 01:02:34.984491 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:02:34.984498 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:02:34.984505 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:02:34.984512 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:02:34.984520 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:02:34.984527 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:02:34.984535 | orchestrator |
2025-11-23 01:02:34.984558 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-11-23 01:02:34.984564 | orchestrator | Sunday 23 November 2025 00:58:55 +0000 (0:00:00.553) 0:00:05.826 *******
2025-11-23 01:02:34.984570 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-11-23 01:02:34.984577 | orchestrator |
2025-11-23 01:02:34.984583 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-11-23 01:02:34.984589 | orchestrator | Sunday 23 November 2025 00:58:59 +0000 (0:00:03.626) 0:00:09.453 *******
2025-11-23 01:02:34.984596 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-11-23 01:02:34.984614 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-11-23 01:02:34.984621 | orchestrator |
2025-11-23 01:02:34.984640 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-11-23 01:02:34.984647 | orchestrator | Sunday 23 November 2025 00:59:06 +0000 (0:00:07.077) 0:00:16.530 *******
2025-11-23 01:02:34.984699 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-11-23 01:02:34.984706 | orchestrator |
2025-11-23 01:02:34.984713 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-11-23 01:02:34.984720 | orchestrator | Sunday 23 November 2025 00:59:09 +0000 (0:00:03.495) 0:00:20.025 *******
2025-11-23 01:02:34.984728 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-11-23 01:02:34.984735 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-11-23 01:02:34.984742 | orchestrator |
2025-11-23 01:02:34.984749 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-11-23 01:02:34.984756 | orchestrator | Sunday 23 November 2025 00:59:14 +0000 (0:00:04.098) 0:00:24.123 *******
2025-11-23 01:02:34.984763 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-11-23 01:02:34.984770 | orchestrator |
2025-11-23 01:02:34.984776 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-11-23 01:02:34.984783 | orchestrator | Sunday 23 November 2025 00:59:17 +0000 (0:00:03.497) 0:00:27.621 *******
2025-11-23 01:02:34.984790 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-11-23 01:02:34.984797 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-11-23 01:02:34.984804 | orchestrator |
2025-11-23 01:02:34.984810 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-11-23 01:02:34.984818 | orchestrator | Sunday 23 November 2025 00:59:25 +0000 (0:00:07.578) 0:00:35.199 *******
2025-11-23 01:02:34.984825 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:02:34.984833 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:02:34.984840 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:02:34.984847 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:02:34.984854 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:02:34.984860 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:02:34.984868 | orchestrator |
2025-11-23 01:02:34.984875 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-11-23 01:02:34.984882 | orchestrator | Sunday 23 November 2025 00:59:25 +0000 (0:00:00.643) 0:00:35.843 *******
2025-11-23 01:02:34.984890 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:02:34.984898 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:02:34.984906 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:02:34.984913 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:02:34.984919 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:02:34.984927 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:02:34.984934 | orchestrator |
2025-11-23 01:02:34.984942 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-11-23 01:02:34.984949 | orchestrator | Sunday 23 November 2025 00:59:27 +0000 (0:00:02.000) 0:00:37.844 *******
2025-11-23 01:02:34.984956 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:02:34.984963 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:02:34.984969 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:02:34.984976 | orchestrator | ok: [testbed-node-3]
2025-11-23 01:02:34.984985 | orchestrator | ok: [testbed-node-4]
2025-11-23 01:02:34.984991 | orchestrator | ok: [testbed-node-5]
2025-11-23 01:02:34.984997 | orchestrator |
2025-11-23 01:02:34.985004 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-11-23 01:02:34.985010 | orchestrator | Sunday 23 November 2025 00:59:28 +0000 (0:00:00.781) 0:00:38.625 *******
2025-11-23 01:02:34.985017 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:02:34.985023 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:02:34.985029 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:02:34.985036 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:02:34.985054 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:02:34.985061 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:02:34.985106 | orchestrator |
2025-11-23 01:02:34.985114 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-11-23 01:02:34.985120 | orchestrator | Sunday 23 November 2025 00:59:30 +0000 (0:00:01.651) 0:00:40.276 *******
2025-11-23 01:02:34.985136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-23 01:02:34.985156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-23 01:02:34.985164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-23 01:02:34.985171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-11-23 01:02:34.985179 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-11-23 01:02:34.985196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-11-23 01:02:34.985203 | orchestrator |
2025-11-23 01:02:34.985210 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-11-23 01:02:34.985216 | orchestrator | Sunday 23 November 2025 00:59:32 +0000 (0:00:02.329) 0:00:42.605 *******
2025-11-23 01:02:34.985222 | orchestrator | [WARNING]: Skipped
2025-11-23 01:02:34.985229 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-11-23 01:02:34.985236 | orchestrator | due to this access issue:
2025-11-23 01:02:34.985242 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-11-23 01:02:34.985249 | orchestrator | a directory
2025-11-23 01:02:34.985255 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-23 01:02:34.985261 | orchestrator |
2025-11-23 01:02:34.985268 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-11-23 01:02:34.985278 | orchestrator | Sunday 23 November 2025 00:59:33 +0000 (0:00:00.758) 0:00:43.364 *******
2025-11-23 01:02:34.985285 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-23 01:02:34.985312 | orchestrator |
2025-11-23 01:02:34.985318 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-11-23 01:02:34.985324 | orchestrator | Sunday 23 November 2025 00:59:34 +0000 (0:00:01.078) 0:00:44.443 *******
2025-11-23 01:02:34.985330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-23 01:02:34.985337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-23 01:02:34.985350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-23 01:02:34.985360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-11-23 01:02:34.985372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-11-23 01:02:34.985379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.985385 | orchestrator | 2025-11-23 01:02:34.985392 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-11-23 01:02:34.985399 | orchestrator | Sunday 23 November 2025 00:59:37 +0000 (0:00:02.686) 0:00:47.129 ******* 2025-11-23 01:02:34.985405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985416 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.985423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985430 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.985439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.985445 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.985456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.985462 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.985468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985487 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.985493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.985500 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.985507 | orchestrator | 2025-11-23 01:02:34.985513 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-11-23 01:02:34.985520 | orchestrator | Sunday 23 November 2025 00:59:39 +0000 (0:00:02.511) 0:00:49.640 ******* 2025-11-23 01:02:34.985526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985536 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.985547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985553 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.985558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.985569 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.985625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985633 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.985660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.985666 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.985677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-11-23 01:02:34.985683 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.985690 | orchestrator | 2025-11-23 01:02:34.985695 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-11-23 01:02:34.985702 | orchestrator | Sunday 23 November 2025 00:59:41 +0000 (0:00:02.441) 0:00:52.081 ******* 2025-11-23 01:02:34.985709 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.985715 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.985721 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.985727 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.985733 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.985739 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.985745 | orchestrator | 2025-11-23 01:02:34.985752 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-11-23 01:02:34.985763 | orchestrator | Sunday 23 November 2025 00:59:43 +0000 (0:00:01.917) 0:00:53.999 ******* 2025-11-23 01:02:34.985769 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.985775 | orchestrator | 2025-11-23 01:02:34.985781 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-11-23 01:02:34.985794 | orchestrator | Sunday 23 November 2025 00:59:43 +0000 (0:00:00.105) 0:00:54.104 ******* 2025-11-23 01:02:34.985800 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.985806 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.985812 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.985818 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.985824 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.985831 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.985837 | orchestrator | 2025-11-23 01:02:34.985843 | orchestrator | TASK [neutron : Copying over existing policy file] 
***************************** 2025-11-23 01:02:34.985849 | orchestrator | Sunday 23 November 2025 00:59:44 +0000 (0:00:00.675) 0:00:54.780 ******* 2025-11-23 01:02:34.985856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985862 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.985869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985876 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.985883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.985889 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.986257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986320 | orchestrator | skipping: 
[testbed-node-5] 2025-11-23 01:02:34.986330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986337 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.986345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986353 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.986362 | orchestrator | 2025-11-23 01:02:34.986369 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-11-23 01:02:34.986376 | orchestrator | Sunday 23 November 2025 00:59:46 +0000 (0:00:01.700) 0:00:56.480 
******* 2025-11-23 01:02:34.986383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.986434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986440 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.986448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.986455 | orchestrator | 2025-11-23 01:02:34.986461 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-11-23 01:02:34.986467 | orchestrator | Sunday 23 November 2025 00:59:49 +0000 (0:00:03.440) 0:00:59.920 ******* 2025-11-23 01:02:34.986478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.986516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.986525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.986536 | orchestrator | 2025-11-23 01:02:34.986543 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-11-23 01:02:34.986550 | orchestrator | Sunday 23 November 2025 00:59:54 +0000 (0:00:05.083) 0:01:05.004 ******* 2025-11-23 01:02:34.986561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.986568 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.986574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.986580 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.986586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.986592 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.986598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986608 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.986618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986625 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.986634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986641 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.986647 | orchestrator | 2025-11-23 01:02:34.986653 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-11-23 01:02:34.986659 | orchestrator | Sunday 23 November 2025 00:59:57 +0000 (0:00:02.645) 0:01:07.649 ******* 2025-11-23 01:02:34.986665 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:02:34.986670 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.986677 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.986683 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.986689 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:02:34.986717 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:02:34.986724 | orchestrator | 2025-11-23 01:02:34.986730 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-11-23 01:02:34.986736 | orchestrator | Sunday 23 November 2025 01:00:00 +0000 (0:00:03.401) 0:01:11.051 ******* 2025-11-23 01:02:34.986742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986749 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.986755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986769 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.986779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.986786 | orchestrator | 
skipping: [testbed-node-5] 2025-11-23 01:02:34.986798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.986823 | orchestrator | 2025-11-23 01:02:34.986829 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-11-23 01:02:34.986835 | orchestrator | Sunday 23 November 2025 01:00:04 +0000 (0:00:04.044) 0:01:15.096 ******* 2025-11-23 01:02:34.986841 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.986847 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.986854 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.986860 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.986867 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.986873 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.986879 | orchestrator | 2025-11-23 01:02:34.986886 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-11-23 01:02:34.986893 | orchestrator | Sunday 23 November 2025 01:00:07 +0000 (0:00:02.303) 0:01:17.399 ******* 2025-11-23 01:02:34.986899 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.986905 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.986912 | orchestrator | 
skipping: [testbed-node-2] 2025-11-23 01:02:34.986918 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.986924 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.986931 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.986937 | orchestrator | 2025-11-23 01:02:34.986943 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-11-23 01:02:34.986950 | orchestrator | Sunday 23 November 2025 01:00:09 +0000 (0:00:02.638) 0:01:20.038 ******* 2025-11-23 01:02:34.986956 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.986962 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.986969 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.986975 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.986981 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.986987 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.986994 | orchestrator | 2025-11-23 01:02:34.987003 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-11-23 01:02:34.987009 | orchestrator | Sunday 23 November 2025 01:00:12 +0000 (0:00:02.081) 0:01:22.119 ******* 2025-11-23 01:02:34.987016 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987022 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987028 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987035 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987041 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987047 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987054 | orchestrator | 2025-11-23 01:02:34.987060 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-11-23 01:02:34.987066 | orchestrator | Sunday 23 November 2025 01:00:14 +0000 (0:00:02.129) 0:01:24.248 ******* 2025-11-23 01:02:34.987072 | orchestrator | 
skipping: [testbed-node-0] 2025-11-23 01:02:34.987079 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987085 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987091 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987100 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987105 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987109 | orchestrator | 2025-11-23 01:02:34.987113 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-11-23 01:02:34.987117 | orchestrator | Sunday 23 November 2025 01:00:16 +0000 (0:00:02.053) 0:01:26.302 ******* 2025-11-23 01:02:34.987121 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987124 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987128 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987132 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987135 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987139 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987146 | orchestrator | 2025-11-23 01:02:34.987150 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-11-23 01:02:34.987154 | orchestrator | Sunday 23 November 2025 01:00:19 +0000 (0:00:03.308) 0:01:29.610 ******* 2025-11-23 01:02:34.987158 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-23 01:02:34.987162 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987165 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-23 01:02:34.987169 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987173 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-23 01:02:34.987177 | orchestrator | skipping: [testbed-node-0] 2025-11-23 
01:02:34.987180 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-23 01:02:34.987184 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987188 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-23 01:02:34.987192 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987195 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-23 01:02:34.987199 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987203 | orchestrator | 2025-11-23 01:02:34.987206 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-11-23 01:02:34.987210 | orchestrator | Sunday 23 November 2025 01:00:21 +0000 (0:00:01.967) 0:01:31.577 ******* 2025-11-23 01:02:34.987214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.987218 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.987228 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.987242 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.987250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.987254 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987258 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.987265 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987269 | orchestrator | 2025-11-23 01:02:34.987273 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-11-23 01:02:34.987276 | orchestrator | Sunday 23 November 2025 01:00:23 +0000 (0:00:02.289) 0:01:33.867 ******* 2025-11-23 01:02:34.987282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.987289 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.987337 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.987345 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.987353 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.987360 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.987374 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987378 | orchestrator | 2025-11-23 01:02:34.987381 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-11-23 01:02:34.987385 | orchestrator | Sunday 23 November 2025 01:00:25 +0000 (0:00:02.212) 0:01:36.079 ******* 2025-11-23 01:02:34.987389 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987395 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987398 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987402 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987406 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987409 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987413 | orchestrator | 2025-11-23 01:02:34.987417 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-11-23 01:02:34.987420 | orchestrator | Sunday 23 November 2025 01:00:27 +0000 (0:00:01.801) 0:01:37.881 ******* 2025-11-23 01:02:34.987424 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987428 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987432 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987435 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:02:34.987439 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:02:34.987442 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:02:34.987446 | orchestrator | 2025-11-23 01:02:34.987450 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-11-23 01:02:34.987453 | orchestrator | Sunday 23 November 2025 01:00:32 +0000 (0:00:04.381) 0:01:42.262 
******* 2025-11-23 01:02:34.987457 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987461 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987464 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987468 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987474 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987480 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987486 | orchestrator | 2025-11-23 01:02:34.987492 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-11-23 01:02:34.987498 | orchestrator | Sunday 23 November 2025 01:00:34 +0000 (0:00:01.950) 0:01:44.213 ******* 2025-11-23 01:02:34.987504 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987510 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987516 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987523 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987529 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987536 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987542 | orchestrator | 2025-11-23 01:02:34.987548 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-11-23 01:02:34.987555 | orchestrator | Sunday 23 November 2025 01:00:36 +0000 (0:00:02.300) 0:01:46.513 ******* 2025-11-23 01:02:34.987561 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987567 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987573 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987579 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987586 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987593 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987600 | orchestrator | 2025-11-23 01:02:34.987607 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] 
************************************ 2025-11-23 01:02:34.987614 | orchestrator | Sunday 23 November 2025 01:00:38 +0000 (0:00:01.732) 0:01:48.245 ******* 2025-11-23 01:02:34.987626 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987633 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987639 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987646 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987653 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987659 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987666 | orchestrator | 2025-11-23 01:02:34.987673 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-11-23 01:02:34.987679 | orchestrator | Sunday 23 November 2025 01:00:40 +0000 (0:00:01.946) 0:01:50.191 ******* 2025-11-23 01:02:34.987686 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987692 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987699 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987706 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987712 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987719 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987726 | orchestrator | 2025-11-23 01:02:34.987733 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-11-23 01:02:34.987739 | orchestrator | Sunday 23 November 2025 01:00:43 +0000 (0:00:03.345) 0:01:53.537 ******* 2025-11-23 01:02:34.987745 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987752 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987757 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987763 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987769 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987776 | orchestrator | skipping: 
[testbed-node-5] 2025-11-23 01:02:34.987782 | orchestrator | 2025-11-23 01:02:34.987788 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-11-23 01:02:34.987795 | orchestrator | Sunday 23 November 2025 01:00:45 +0000 (0:00:02.303) 0:01:55.840 ******* 2025-11-23 01:02:34.987801 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987807 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987813 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987819 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987825 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987831 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987837 | orchestrator | 2025-11-23 01:02:34.987843 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-11-23 01:02:34.987854 | orchestrator | Sunday 23 November 2025 01:00:47 +0000 (0:00:01.880) 0:01:57.721 ******* 2025-11-23 01:02:34.987862 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-23 01:02:34.987870 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.987876 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-23 01:02:34.987883 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.987889 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-23 01:02:34.987895 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987902 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-23 01:02:34.987909 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.987923 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  
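The loop items echoed in this log carry per-service container definitions (container_name, image, volumes, and a healthcheck mapping with interval/retries/start_period/test/timeout). A minimal sketch, assuming only the dict shape visible in the log, of how such a healthcheck mapping could be rendered as Docker CLI flags; this is illustrative and not kolla-ansible's actual rendering code:

```python
# Sketch: convert a kolla-style healthcheck mapping (dict shape taken
# from the loop items in this log) into `docker run` health flags.
# The rendering itself is an illustration, not kolla-ansible's code.

def healthcheck_flags(hc: dict) -> list[str]:
    """Render interval/retries/start_period/timeout (seconds) and the
    CMD-SHELL test as docker CLI flags."""
    test = hc["test"]
    # ['CMD-SHELL', '<command>'] -> pass the shell command through
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example values copied from the neutron-ovn-metadata-agent item above.
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
    "timeout": "30",
}
print(healthcheck_flags(hc))
```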
2025-11-23 01:02:34.987931 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.987938 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-23 01:02:34.987944 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.987951 | orchestrator | 2025-11-23 01:02:34.987959 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-11-23 01:02:34.987965 | orchestrator | Sunday 23 November 2025 01:00:49 +0000 (0:00:01.803) 0:01:59.525 ******* 2025-11-23 01:02:34.987978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.987985 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.987991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.987997 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:02:34.988004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-23 01:02:34.988011 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.988022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.988028 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.988039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.988053 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.988059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-23 01:02:34.988066 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.988072 | orchestrator | 2025-11-23 01:02:34.988079 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-11-23 01:02:34.988086 | orchestrator | Sunday 23 November 2025 01:00:51 +0000 (0:00:01.684) 0:02:01.209 ******* 2025-11-23 01:02:34.988092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.988099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.988112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.988125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-23 01:02:34.988131 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.988138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-23 01:02:34.988144 | orchestrator | 2025-11-23 01:02:34.988150 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-23 01:02:34.988157 | orchestrator | Sunday 23 November 2025 01:00:53 +0000 (0:00:02.593) 0:02:03.803 ******* 2025-11-23 01:02:34.988163 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:02:34.988169 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:02:34.988176 | orchestrator | 
skipping: [testbed-node-2] 2025-11-23 01:02:34.988183 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:02:34.988189 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:02:34.988195 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:02:34.988201 | orchestrator | 2025-11-23 01:02:34.988208 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-11-23 01:02:34.988214 | orchestrator | Sunday 23 November 2025 01:00:54 +0000 (0:00:00.780) 0:02:04.583 ******* 2025-11-23 01:02:34.988221 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:02:34.988227 | orchestrator | 2025-11-23 01:02:34.988234 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-11-23 01:02:34.988240 | orchestrator | Sunday 23 November 2025 01:00:56 +0000 (0:00:02.311) 0:02:06.894 ******* 2025-11-23 01:02:34.988249 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:02:34.988257 | orchestrator | 2025-11-23 01:02:34.988265 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-11-23 01:02:34.988271 | orchestrator | Sunday 23 November 2025 01:00:59 +0000 (0:00:02.516) 0:02:09.410 ******* 2025-11-23 01:02:34.988283 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:02:34.988289 | orchestrator | 2025-11-23 01:02:34.988347 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-23 01:02:34.988354 | orchestrator | Sunday 23 November 2025 01:01:41 +0000 (0:00:42.048) 0:02:51.459 ******* 2025-11-23 01:02:34.988360 | orchestrator | 2025-11-23 01:02:34.988366 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-23 01:02:34.988371 | orchestrator | Sunday 23 November 2025 01:01:41 +0000 (0:00:00.145) 0:02:51.605 ******* 2025-11-23 01:02:34.988377 | orchestrator | 2025-11-23 01:02:34.988383 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2025-11-23 01:02:34.988389 | orchestrator | Sunday 23 November 2025 01:01:41 +0000 (0:00:00.232) 0:02:51.837 ******* 2025-11-23 01:02:34.988395 | orchestrator | 2025-11-23 01:02:34.988400 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-23 01:02:34.988406 | orchestrator | Sunday 23 November 2025 01:01:41 +0000 (0:00:00.060) 0:02:51.898 ******* 2025-11-23 01:02:34.988412 | orchestrator | 2025-11-23 01:02:34.988424 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-23 01:02:34.988429 | orchestrator | Sunday 23 November 2025 01:01:41 +0000 (0:00:00.062) 0:02:51.961 ******* 2025-11-23 01:02:34.988433 | orchestrator | 2025-11-23 01:02:34.988437 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-23 01:02:34.988441 | orchestrator | Sunday 23 November 2025 01:01:41 +0000 (0:00:00.061) 0:02:52.022 ******* 2025-11-23 01:02:34.988448 | orchestrator | 2025-11-23 01:02:34.988454 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-11-23 01:02:34.988460 | orchestrator | Sunday 23 November 2025 01:01:41 +0000 (0:00:00.061) 0:02:52.084 ******* 2025-11-23 01:02:34.988466 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:02:34.988472 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:02:34.988478 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:02:34.988484 | orchestrator | 2025-11-23 01:02:34.988490 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-11-23 01:02:34.988496 | orchestrator | Sunday 23 November 2025 01:02:07 +0000 (0:00:25.041) 0:03:17.125 ******* 2025-11-23 01:02:34.988502 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:02:34.988508 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:02:34.988514 | orchestrator | changed: 
[testbed-node-5] 2025-11-23 01:02:34.988520 | orchestrator | 2025-11-23 01:02:34.988527 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 01:02:34.988534 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-23 01:02:34.988542 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-11-23 01:02:34.988549 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-11-23 01:02:34.988555 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-23 01:02:34.988562 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-23 01:02:34.988568 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-23 01:02:34.988574 | orchestrator | 2025-11-23 01:02:34.988581 | orchestrator | 2025-11-23 01:02:34.988587 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 01:02:34.988593 | orchestrator | Sunday 23 November 2025 01:02:33 +0000 (0:00:26.830) 0:03:43.956 ******* 2025-11-23 01:02:34.988606 | orchestrator | =============================================================================== 2025-11-23 01:02:34.988612 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.05s 2025-11-23 01:02:34.988619 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 26.83s 2025-11-23 01:02:34.988625 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.04s 2025-11-23 01:02:34.988631 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.58s 2025-11-23 01:02:34.988638 | orchestrator | 
service-ks-register : neutron | Creating endpoints ---------------------- 7.08s 2025-11-23 01:02:34.988644 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.08s 2025-11-23 01:02:34.988650 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.38s 2025-11-23 01:02:34.988656 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.10s 2025-11-23 01:02:34.988662 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.04s 2025-11-23 01:02:34.988668 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.63s 2025-11-23 01:02:34.988672 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.50s 2025-11-23 01:02:34.988676 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.50s 2025-11-23 01:02:34.988683 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.44s 2025-11-23 01:02:34.988689 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.40s 2025-11-23 01:02:34.988695 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.35s 2025-11-23 01:02:34.988701 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.31s 2025-11-23 01:02:34.988711 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.69s 2025-11-23 01:02:34.988717 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 2.65s 2025-11-23 01:02:34.988724 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 2.64s 2025-11-23 01:02:34.988730 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.59s 2025-11-23 01:02:34.988736 | orchestrator | 2025-11-23 
01:02:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:02:38.017773 | orchestrator | 2025-11-23 01:02:38 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:02:38.018469 | orchestrator | 2025-11-23 01:02:38 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED 2025-11-23 01:02:38.020081 | orchestrator | 2025-11-23 01:02:38 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:02:38.020978 | orchestrator | 2025-11-23 01:02:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:02:38.021198 | orchestrator | 2025-11-23 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:02:41.069795 | orchestrator | 2025-11-23 01:02:41 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:02:41.070220 | orchestrator | 2025-11-23 01:02:41 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED 2025-11-23 01:02:41.070960 | orchestrator | 2025-11-23 01:02:41 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:02:41.072867 | orchestrator | 2025-11-23 01:02:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:02:41.072904 | orchestrator | 2025-11-23 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:02:44.107146 | orchestrator | 2025-11-23 01:02:44 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:02:44.107261 | orchestrator | 2025-11-23 01:02:44 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED 2025-11-23 01:02:44.107704 | orchestrator | 2025-11-23 01:02:44 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:02:44.108422 | orchestrator | 2025-11-23 01:02:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:02:44.108448 | orchestrator | 2025-11-23 01:02:44 | INFO  | Wait 1 
second(s) until the next check 2025-11-23 01:02:47.146373 | orchestrator | 2025-11-23 01:02:47 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:02:47.147700 | orchestrator | 2025-11-23 01:02:47 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED 2025-11-23 01:02:47.149121 | orchestrator | 2025-11-23 01:02:47 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:02:47.150459 | orchestrator | 2025-11-23 01:02:47 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:02:47.150497 | orchestrator | 2025-11-23 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:02:50.196927 | orchestrator | 2025-11-23 01:02:50 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:02:50.201928 | orchestrator | 2025-11-23 01:02:50 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED 2025-11-23 01:02:50.205863 | orchestrator | 2025-11-23 01:02:50 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:02:50.206493 | orchestrator | 2025-11-23 01:02:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:02:50.206528 | orchestrator | 2025-11-23 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:02:53.245408 | orchestrator | 2025-11-23 01:02:53 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:02:53.246645 | orchestrator | 2025-11-23 01:02:53 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED 2025-11-23 01:02:53.248237 | orchestrator | 2025-11-23 01:02:53 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state STARTED 2025-11-23 01:02:53.250216 | orchestrator | 2025-11-23 01:02:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:02:53.250492 | orchestrator | 2025-11-23 01:02:53 | INFO  | Wait 1 second(s) until the next check 
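The repeating "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines come from a client polling task IDs until each reaches a terminal state. A minimal sketch of such a loop, assuming a hypothetical `get_state` callable as a stand-in for the real task-status lookup (the actual OSISM client code is not shown in this log):

```python
import time

# Sketch of the poll loop behind the "Task ... is in state STARTED" /
# "Wait 1 second(s) until the next check" lines. `get_state` is a
# hypothetical stand-in for the real task-status lookup.

def wait_for_tasks(task_ids, get_state, interval=1.0, sleep=time.sleep):
    """Poll every `interval` seconds until every task id reports a
    terminal state (SUCCESS or FAILURE); return the final states."""
    pending = set(task_ids)
    states = {}
    while pending:
        for tid in sorted(pending):
            states[tid] = get_state(tid)
        pending = {t for t in pending if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            sleep(interval)
    return states

# Deterministic demo: task "a" succeeds on the second poll.
calls = {"a": iter(["STARTED", "SUCCESS"])}
print(wait_for_tasks(["a"], lambda t: next(calls[t]), sleep=lambda s: None))
# -> {'a': 'SUCCESS'}
```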
2025-11-23 01:03:14.559942 | orchestrator | 2025-11-23 
01:03:14 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:03:14.561199 | orchestrator | 2025-11-23 01:03:14 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED 2025-11-23 01:03:14.565474 | orchestrator | 2025-11-23 01:03:14 | INFO  | Task d37ea182-309f-47e7-86e0-64cea9888e08 is in state SUCCESS 2025-11-23 01:03:14.567232 | orchestrator | 2025-11-23 01:03:14.567792 | orchestrator | 2025-11-23 01:03:14.567812 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 01:03:14.567827 | orchestrator | 2025-11-23 01:03:14.567841 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 01:03:14.567854 | orchestrator | Sunday 23 November 2025 01:00:35 +0000 (0:00:00.416) 0:00:00.416 ******* 2025-11-23 01:03:14.567866 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:03:14.567881 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:03:14.567895 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:03:14.567908 | orchestrator | 2025-11-23 01:03:14.567920 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 01:03:14.567933 | orchestrator | Sunday 23 November 2025 01:00:35 +0000 (0:00:00.502) 0:00:00.918 ******* 2025-11-23 01:03:14.567985 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-11-23 01:03:14.567999 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-11-23 01:03:14.568012 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-11-23 01:03:14.568024 | orchestrator | 2025-11-23 01:03:14.568036 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-11-23 01:03:14.568048 | orchestrator | 2025-11-23 01:03:14.568061 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-23 01:03:14.568073 | 
orchestrator | Sunday 23 November 2025 01:00:36 +0000 (0:00:00.334) 0:00:01.253 ******* 2025-11-23 01:03:14.568085 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:03:14.568100 | orchestrator | 2025-11-23 01:03:14.568113 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-11-23 01:03:14.568125 | orchestrator | Sunday 23 November 2025 01:00:36 +0000 (0:00:00.456) 0:00:01.709 ******* 2025-11-23 01:03:14.568138 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-11-23 01:03:14.568837 | orchestrator | 2025-11-23 01:03:14.568861 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-11-23 01:03:14.568872 | orchestrator | Sunday 23 November 2025 01:00:40 +0000 (0:00:03.456) 0:00:05.166 ******* 2025-11-23 01:03:14.568884 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-11-23 01:03:14.568897 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-11-23 01:03:14.568908 | orchestrator | 2025-11-23 01:03:14.568918 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-11-23 01:03:14.568929 | orchestrator | Sunday 23 November 2025 01:00:46 +0000 (0:00:06.974) 0:00:12.141 ******* 2025-11-23 01:03:14.568940 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-23 01:03:14.568951 | orchestrator | 2025-11-23 01:03:14.568962 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-11-23 01:03:14.568972 | orchestrator | Sunday 23 November 2025 01:00:50 +0000 (0:00:03.412) 0:00:15.553 ******* 2025-11-23 01:03:14.568983 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-23 01:03:14.568994 | orchestrator | changed: 
[testbed-node-0] => (item=designate -> service) 2025-11-23 01:03:14.569004 | orchestrator | 2025-11-23 01:03:14.569015 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-11-23 01:03:14.569025 | orchestrator | Sunday 23 November 2025 01:00:54 +0000 (0:00:04.392) 0:00:19.946 ******* 2025-11-23 01:03:14.569036 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-23 01:03:14.569046 | orchestrator | 2025-11-23 01:03:14.569057 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-11-23 01:03:14.569067 | orchestrator | Sunday 23 November 2025 01:00:58 +0000 (0:00:03.636) 0:00:23.583 ******* 2025-11-23 01:03:14.569078 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-11-23 01:03:14.569088 | orchestrator | 2025-11-23 01:03:14.569099 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-11-23 01:03:14.569109 | orchestrator | Sunday 23 November 2025 01:01:02 +0000 (0:00:04.396) 0:00:27.979 ******* 2025-11-23 01:03:14.569124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
2025-11-23 01:03:14.569174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.569196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.569209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.569475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570624 | orchestrator | 2025-11-23 01:03:14.570636 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-11-23 01:03:14.570647 | orchestrator | Sunday 23 November 2025 01:01:05 +0000 (0:00:02.901) 0:00:30.880 ******* 2025-11-23 01:03:14.570658 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:03:14.570670 | orchestrator | 2025-11-23 01:03:14.570680 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-11-23 01:03:14.570691 | orchestrator | Sunday 
23 November 2025 01:01:05 +0000 (0:00:00.141) 0:00:31.022 ******* 2025-11-23 01:03:14.570702 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:03:14.570713 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:03:14.570723 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:03:14.570746 | orchestrator | 2025-11-23 01:03:14.570757 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-23 01:03:14.570768 | orchestrator | Sunday 23 November 2025 01:01:06 +0000 (0:00:00.253) 0:00:31.276 ******* 2025-11-23 01:03:14.570779 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:03:14.570789 | orchestrator | 2025-11-23 01:03:14.570800 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-11-23 01:03:14.570811 | orchestrator | Sunday 23 November 2025 01:01:06 +0000 (0:00:00.585) 0:00:31.861 ******* 2025-11-23 01:03:14.570822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.570854 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.570867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.570879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.570962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.571351 | orchestrator | 2025-11-23 01:03:14.571363 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-11-23 01:03:14.571643 | orchestrator | Sunday 23 November 2025 01:01:13 +0000 (0:00:06.347) 0:00:38.209 ******* 2025-11-23 01:03:14.571656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.571669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.572243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572339 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:03:14.572352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.572395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.572419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-11-23 01:03:14.572437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572491 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:03:14.572503 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.572514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.572537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.572970 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:03:14.572982 | orchestrator | 2025-11-23 01:03:14.572993 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-11-23 01:03:14.573004 | orchestrator | Sunday 23 November 2025 01:01:13 +0000 (0:00:00.743) 0:00:38.952 ******* 2025-11-23 01:03:14.573015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.573027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.573049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573108 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:03:14.573120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.573131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.573149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573206 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:03:14.573218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.573229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.573240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.573441 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:03:14.573533 | orchestrator | 2025-11-23 01:03:14.573547 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-11-23 01:03:14.573557 | orchestrator | Sunday 23 November 2025 01:01:15 +0000 (0:00:01.209) 0:00:40.162 ******* 2025-11-23 01:03:14.573567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.573578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.573598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.573632 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-11-23 01:03:14.573662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573703 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573817 | orchestrator | 2025-11-23 01:03:14.573826 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-11-23 01:03:14.573836 | orchestrator | Sunday 23 November 2025 01:01:21 +0000 (0:00:06.165) 0:00:46.327 ******* 2025-11-23 01:03:14.573846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.573856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.573867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.573888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2025-11-23 01:03:14.573914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573944 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.573978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-11-23 01:03:14.574523 | orchestrator |
2025-11-23 01:03:14.574533 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-11-23 01:03:14.574543 | orchestrator | Sunday 23 November 2025 01:01:34 +0000 (0:00:13.652) 0:00:59.979 *******
2025-11-23 01:03:14.574553 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-11-23 01:03:14.574563 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-11-23 01:03:14.574572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-11-23 01:03:14.574582 | orchestrator |
2025-11-23 01:03:14.574591 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-11-23 01:03:14.574601 | orchestrator | Sunday 23 November 2025 01:01:38 +0000 (0:00:04.158) 0:01:04.138 *******
2025-11-23 01:03:14.574611 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-11-23 01:03:14.574620 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-11-23 01:03:14.574630 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-11-23 01:03:14.574639 | orchestrator |
2025-11-23 01:03:14.574648 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-11-23 01:03:14.574658 | orchestrator | Sunday 23 November 2025 01:01:42 +0000 (0:00:03.348) 0:01:07.486 *******
2025-11-23 01:03:14.574668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value':
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.574678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.574704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.574720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574740 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-11-23 01:03:14.574884 | orchestrator |
2025-11-23 01:03:14.574892 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-11-23 01:03:14.574900 | orchestrator | Sunday 23 November 2025 01:01:46 +0000 (0:00:03.823) 0:01:11.309 *******
2025-11-23 01:03:14.574909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-11-23 01:03:14.574917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.574931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.574944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574956 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.574985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.574994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575188 | orchestrator | 2025-11-23 01:03:14.575198 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-23 01:03:14.575207 | orchestrator | Sunday 23 November 2025 01:01:48 +0000 (0:00:02.827) 0:01:14.136 ******* 2025-11-23 01:03:14.575216 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:03:14.575226 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:03:14.575235 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:03:14.575244 | orchestrator | 2025-11-23 01:03:14.575253 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-11-23 01:03:14.575262 | orchestrator | Sunday 23 November 2025 01:01:49 +0000 (0:00:00.562) 0:01:14.699 ******* 2025-11-23 01:03:14.575272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.575287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.575312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.575362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575377 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:03:14.575387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.575397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575441 | orchestrator | skipping: 
[testbed-node-1] 2025-11-23 01:03:14.575449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-23 01:03:14.575463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-23 01:03:14.575471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:03:14.575513 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:03:14.575521 | orchestrator | 2025-11-23 01:03:14.575529 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-11-23 01:03:14.575537 | orchestrator | Sunday 23 November 2025 01:01:50 +0000 (0:00:00.929) 0:01:15.629 ******* 2025-11-23 01:03:14.575545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.575559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.575568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-23 01:03:14.575576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:03:14.575734 | orchestrator | 2025-11-23 01:03:14.575742 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-23 01:03:14.575750 | orchestrator | Sunday 23 November 2025 01:01:55 +0000 (0:00:05.127) 0:01:20.756 ******* 2025-11-23 01:03:14.575763 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:03:14.575771 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:03:14.575779 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:03:14.575787 | orchestrator | 2025-11-23 01:03:14.575795 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-11-23 01:03:14.575806 | orchestrator | Sunday 23 November 2025 01:01:55 +0000 (0:00:00.305) 0:01:21.062 ******* 2025-11-23 01:03:14.575815 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-11-23 01:03:14.575823 | orchestrator | 2025-11-23 01:03:14.575831 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-11-23 01:03:14.575839 | orchestrator | Sunday 23 November 2025 01:01:58 +0000 (0:00:02.390) 0:01:23.452 ******* 2025-11-23 01:03:14.575847 | orchestrator | changed: [testbed-node-0] => (item=None) 
2025-11-23 01:03:14.575855 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-11-23 01:03:14.575863 | orchestrator |
2025-11-23 01:03:14.575871 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-11-23 01:03:14.575879 | orchestrator | Sunday 23 November 2025 01:02:00 +0000 (0:00:02.337) 0:01:25.789 *******
2025-11-23 01:03:14.575887 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:03:14.575895 | orchestrator |
2025-11-23 01:03:14.575903 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-11-23 01:03:14.575910 | orchestrator | Sunday 23 November 2025 01:02:15 +0000 (0:00:15.020) 0:01:40.810 *******
2025-11-23 01:03:14.575918 | orchestrator |
2025-11-23 01:03:14.575926 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-11-23 01:03:14.575934 | orchestrator | Sunday 23 November 2025 01:02:15 +0000 (0:00:00.237) 0:01:41.048 *******
2025-11-23 01:03:14.575942 | orchestrator |
2025-11-23 01:03:14.575950 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-11-23 01:03:14.575958 | orchestrator | Sunday 23 November 2025 01:02:15 +0000 (0:00:00.070) 0:01:41.118 *******
2025-11-23 01:03:14.575966 | orchestrator |
2025-11-23 01:03:14.575974 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-11-23 01:03:14.575981 | orchestrator | Sunday 23 November 2025 01:02:16 +0000 (0:00:00.075) 0:01:41.193 *******
2025-11-23 01:03:14.575989 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:03:14.575997 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:03:14.576005 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:03:14.576013 | orchestrator |
2025-11-23 01:03:14.576021 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-11-23 01:03:14.576029 | orchestrator | Sunday 23 November 2025 01:02:29 +0000 (0:00:13.853) 0:01:55.047 *******
2025-11-23 01:03:14.576036 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:03:14.576044 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:03:14.576052 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:03:14.576060 | orchestrator |
2025-11-23 01:03:14.576067 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-11-23 01:03:14.576075 | orchestrator | Sunday 23 November 2025 01:02:36 +0000 (0:00:06.685) 0:02:01.732 *******
2025-11-23 01:03:14.576083 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:03:14.576091 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:03:14.576099 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:03:14.576106 | orchestrator |
2025-11-23 01:03:14.576114 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-11-23 01:03:14.576122 | orchestrator | Sunday 23 November 2025 01:02:42 +0000 (0:00:05.829) 0:02:07.561 *******
2025-11-23 01:03:14.576130 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:03:14.576138 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:03:14.576146 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:03:14.576154 | orchestrator |
2025-11-23 01:03:14.576161 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-11-23 01:03:14.576169 | orchestrator | Sunday 23 November 2025 01:02:52 +0000 (0:00:10.397) 0:02:17.958 *******
2025-11-23 01:03:14.576183 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:03:14.576191 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:03:14.576199 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:03:14.576207 | orchestrator |
2025-11-23 01:03:14.576215 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-11-23 01:03:14.576223 | orchestrator | Sunday 23 November 2025 01:02:58 +0000 (0:00:05.226) 0:02:23.184 *******
2025-11-23 01:03:14.576231 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:03:14.576239 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:03:14.576246 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:03:14.576254 | orchestrator |
2025-11-23 01:03:14.576262 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-11-23 01:03:14.576270 | orchestrator | Sunday 23 November 2025 01:03:04 +0000 (0:00:06.197) 0:02:29.382 *******
2025-11-23 01:03:14.576278 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:03:14.576286 | orchestrator |
2025-11-23 01:03:14.576308 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 01:03:14.576316 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-23 01:03:14.576324 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 01:03:14.576332 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 01:03:14.576340 | orchestrator |
2025-11-23 01:03:14.576348 | orchestrator |
2025-11-23 01:03:14.576360 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 01:03:14.576368 | orchestrator | Sunday 23 November 2025 01:03:12 +0000 (0:00:08.110) 0:02:37.493 *******
2025-11-23 01:03:14.576376 | orchestrator | ===============================================================================
2025-11-23 01:03:14.576384 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.02s
2025-11-23 01:03:14.576392 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.85s
2025-11-23 01:03:14.576399 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.65s
2025-11-23 01:03:14.576407 | orchestrator | designate : Restart designate-producer container ----------------------- 10.40s
2025-11-23 01:03:14.576419 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.11s
2025-11-23 01:03:14.576426 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.97s
2025-11-23 01:03:14.576434 | orchestrator | designate : Restart designate-api container ----------------------------- 6.69s
2025-11-23 01:03:14.576442 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.35s
2025-11-23 01:03:14.576449 | orchestrator | designate : Restart designate-worker container -------------------------- 6.20s
2025-11-23 01:03:14.576457 | orchestrator | designate : Copying over config.json files for services ----------------- 6.17s
2025-11-23 01:03:14.576465 | orchestrator | designate : Restart designate-central container ------------------------- 5.83s
2025-11-23 01:03:14.576472 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.23s
2025-11-23 01:03:14.576480 | orchestrator | designate : Check designate containers ---------------------------------- 5.13s
2025-11-23 01:03:14.576488 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.40s
2025-11-23 01:03:14.576496 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.39s
2025-11-23 01:03:14.576503 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.16s
2025-11-23 01:03:14.576511 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.82s
2025-11-23 01:03:14.576519 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.64s
2025-11-23 01:03:14.576532 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.46s
2025-11-23 01:03:14.576540 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.41s
2025-11-23 01:03:14.576547 | orchestrator | 2025-11-23 01:03:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:14.576555 | orchestrator | 2025-11-23 01:03:14 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:14.576563 | orchestrator | 2025-11-23 01:03:14 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:17.619130 | orchestrator | 2025-11-23 01:03:17 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:17.620816 | orchestrator | 2025-11-23 01:03:17 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:17.622835 | orchestrator | 2025-11-23 01:03:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:17.624613 | orchestrator | 2025-11-23 01:03:17 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:17.625080 | orchestrator | 2025-11-23 01:03:17 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:20.669014 | orchestrator | 2025-11-23 01:03:20 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:20.671242 | orchestrator | 2025-11-23 01:03:20 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:20.672902 | orchestrator | 2025-11-23 01:03:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:20.674060 | orchestrator | 2025-11-23 01:03:20 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:20.674278 | orchestrator | 2025-11-23 01:03:20 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:23.709092 | orchestrator | 2025-11-23 01:03:23 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:23.709512 | orchestrator | 2025-11-23 01:03:23 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:23.710715 | orchestrator | 2025-11-23 01:03:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:23.711587 | orchestrator | 2025-11-23 01:03:23 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:23.711619 | orchestrator | 2025-11-23 01:03:23 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:26.753725 | orchestrator | 2025-11-23 01:03:26 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:26.754699 | orchestrator | 2025-11-23 01:03:26 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:26.757049 | orchestrator | 2025-11-23 01:03:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:26.759581 | orchestrator | 2025-11-23 01:03:26 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:26.759615 | orchestrator | 2025-11-23 01:03:26 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:29.802495 | orchestrator | 2025-11-23 01:03:29 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:29.803603 | orchestrator | 2025-11-23 01:03:29 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:29.805683 | orchestrator | 2025-11-23 01:03:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:29.807422 | orchestrator | 2025-11-23 01:03:29 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:29.807517 | orchestrator | 2025-11-23 01:03:29 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:32.850630 | orchestrator | 2025-11-23 01:03:32 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:32.851857 | orchestrator | 2025-11-23 01:03:32 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:32.853631 | orchestrator | 2025-11-23 01:03:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:32.854914 | orchestrator | 2025-11-23 01:03:32 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:32.855468 | orchestrator | 2025-11-23 01:03:32 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:35.899332 | orchestrator | 2025-11-23 01:03:35 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:35.900763 | orchestrator | 2025-11-23 01:03:35 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:35.903052 | orchestrator | 2025-11-23 01:03:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:35.904579 | orchestrator | 2025-11-23 01:03:35 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:35.904709 | orchestrator | 2025-11-23 01:03:35 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:38.946756 | orchestrator | 2025-11-23 01:03:38 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:38.947995 | orchestrator | 2025-11-23 01:03:38 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:38.950333 | orchestrator | 2025-11-23 01:03:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:38.952549 | orchestrator | 2025-11-23 01:03:38 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:38.952623 | orchestrator | 2025-11-23 01:03:38 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:41.989898 | orchestrator | 2025-11-23 01:03:41 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:41.990751 | orchestrator | 2025-11-23 01:03:41 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:41.993875 | orchestrator | 2025-11-23 01:03:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:41.996157 | orchestrator | 2025-11-23 01:03:41 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:41.996187 | orchestrator | 2025-11-23 01:03:41 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:45.042171 | orchestrator | 2025-11-23 01:03:45 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:03:45.043205 | orchestrator | 2025-11-23 01:03:45 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state STARTED
2025-11-23 01:03:45.044905 | orchestrator | 2025-11-23 01:03:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:03:45.046580 | orchestrator | 2025-11-23 01:03:45 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:03:45.046616 | orchestrator | 2025-11-23 01:03:45 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:03:48.095012 | orchestrator |
2025-11-23 01:03:48.095116 | orchestrator |
2025-11-23 01:03:48.095132 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 01:03:48.095145 | orchestrator |
2025-11-23 01:03:48.095156 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 01:03:48.095168 | orchestrator | Sunday 23 November 2025 01:02:37 +0000 (0:00:00.227) 0:00:00.227 *******
2025-11-23 01:03:48.095201 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:03:48.095213 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:03:48.095224 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:03:48.095235 | orchestrator |
2025-11-23 01:03:48.095246 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 01:03:48.095256 | orchestrator | Sunday 23 November 2025 01:02:38 +0000 (0:00:00.274) 0:00:00.501 *******
2025-11-23 01:03:48.095268 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-11-23 01:03:48.095279 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-11-23 01:03:48.095337 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-11-23 01:03:48.095349 | orchestrator |
2025-11-23 01:03:48.095359 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-11-23 01:03:48.095370 | orchestrator |
2025-11-23 01:03:48.095396 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-11-23 01:03:48.095407 | orchestrator | Sunday 23 November 2025 01:02:38 +0000 (0:00:00.358) 0:00:00.859 *******
2025-11-23 01:03:48.095418 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 01:03:48.095430 | orchestrator |
2025-11-23 01:03:48.095441 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-11-23 01:03:48.095452 | orchestrator | Sunday 23 November 2025 01:02:39 +0000 (0:00:00.513) 0:00:01.372 *******
2025-11-23 01:03:48.095463 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-11-23 01:03:48.095473 | orchestrator |
2025-11-23 01:03:48.095484 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-11-23 01:03:48.095495 | orchestrator | Sunday 23 November 2025 01:02:43 +0000 (0:00:03.883) 0:00:05.256 *******
2025-11-23 01:03:48.095505 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-11-23 01:03:48.095516 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-11-23 01:03:48.095571 | orchestrator |
2025-11-23 01:03:48.095585 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-11-23 01:03:48.095596 | orchestrator | Sunday 23 November 2025 01:02:50 +0000 (0:00:07.501) 0:00:12.757 *******
2025-11-23 01:03:48.095607 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-11-23 01:03:48.095618 | orchestrator |
2025-11-23 01:03:48.095629 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-11-23 01:03:48.095639 | orchestrator | Sunday 23 November 2025 01:02:54 +0000 (0:00:03.934) 0:00:16.692 *******
2025-11-23 01:03:48.095650 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-11-23 01:03:48.095661 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-11-23 01:03:48.095671 | orchestrator |
2025-11-23 01:03:48.095682 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-11-23 01:03:48.095693 | orchestrator | Sunday 23 November 2025 01:02:58 +0000 (0:00:03.729) 0:00:20.422 *******
2025-11-23 01:03:48.095743 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-11-23 01:03:48.095757 | orchestrator |
2025-11-23 01:03:48.095768 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-11-23 01:03:48.095778 | orchestrator | Sunday 23 November 2025 01:03:02 +0000 (0:00:03.843) 0:00:24.265 *******
2025-11-23 01:03:48.095789 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-11-23 01:03:48.095799 | orchestrator |
2025-11-23 01:03:48.095810 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-11-23 01:03:48.095821 | orchestrator | Sunday 23 November 2025 01:03:06 +0000 (0:00:04.400) 0:00:28.666 *******
2025-11-23 01:03:48.095831 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:03:48.095842 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:03:48.095853 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:03:48.095877 | orchestrator |
2025-11-23 01:03:48.095887 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-11-23 01:03:48.095898 | orchestrator | Sunday 23 November 2025 01:03:06 +0000 (0:00:00.261) 0:00:28.927 *******
2025-11-23 01:03:48.095911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.095946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.095966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.095978 | orchestrator |
2025-11-23 01:03:48.095989 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-11-23 01:03:48.096000 | orchestrator | Sunday 23 November 2025 01:03:07 +0000 (0:00:00.881) 0:00:29.809 *******
2025-11-23 01:03:48.096011 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:03:48.096021 | orchestrator |
2025-11-23 01:03:48.096032 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-11-23 01:03:48.096042 | orchestrator | Sunday 23 November 2025 01:03:07 +0000 (0:00:00.113) 0:00:29.923 *******
2025-11-23 01:03:48.096053 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:03:48.096063 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:03:48.096074 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:03:48.096085 | orchestrator |
2025-11-23 01:03:48.096095 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-11-23 01:03:48.096106 | orchestrator | Sunday 23 November 2025 01:03:08 +0000 (0:00:00.467) 0:00:30.391 *******
2025-11-23 01:03:48.096116 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 01:03:48.096133 | orchestrator |
2025-11-23 01:03:48.096144 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-11-23 01:03:48.096154 | orchestrator | Sunday 23 November 2025 01:03:08 +0000 (0:00:00.539) 0:00:30.930 *******
2025-11-23 01:03:48.096165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096214 | orchestrator |
2025-11-23 01:03:48.096225 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-11-23 01:03:48.096235 | orchestrator | Sunday 23 November 2025 01:03:10 +0000 (0:00:01.622) 0:00:32.553 *******
2025-11-23 01:03:48.096246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096265 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:03:48.096276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096333 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:03:48.096353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096365 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:03:48.096376 | orchestrator |
2025-11-23 01:03:48.096387 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-11-23 01:03:48.096397 | orchestrator | Sunday 23 November 2025 01:03:11 +0000 (0:00:00.958) 0:00:33.511 *******
2025-11-23 01:03:48.096414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096445 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:03:48.096456 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:03:48.096467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096478 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:03:48.096489 | orchestrator |
2025-11-23 01:03:48.096499 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-11-23 01:03:48.096510 | orchestrator | Sunday 23 November 2025 01:03:12 +0000 (0:00:00.749) 0:00:34.261 *******
2025-11-23 01:03:48.096535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096615 | orchestrator |
2025-11-23 01:03:48.096633 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-11-23 01:03:48.096644 | orchestrator | Sunday 23 November 2025 01:03:13 +0000 (0:00:01.283) 0:00:35.545 *******
2025-11-23 01:03:48.096656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-23 01:03:48.096686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-23 01:03:48.096698 | orchestrator | 2025-11-23 01:03:48.096709 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-11-23 01:03:48.096720 | orchestrator | Sunday 23 November 2025 01:03:16 +0000 (0:00:02.716) 0:00:38.261 ******* 2025-11-23 01:03:48.096736 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-23 01:03:48.096747 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-23 01:03:48.096758 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-23 01:03:48.096769 | orchestrator | 2025-11-23 01:03:48.096786 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-11-23 01:03:48.096797 | orchestrator | Sunday 23 November 2025 01:03:17 +0000 (0:00:01.660) 0:00:39.921 ******* 2025-11-23 01:03:48.096807 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:03:48.096818 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:03:48.096829 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:03:48.096839 | orchestrator | 2025-11-23 01:03:48.096850 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-11-23 01:03:48.096860 | orchestrator | Sunday 23 November 2025 01:03:19 +0000 (0:00:01.549) 0:00:41.471 ******* 2025-11-23 01:03:48.096872 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-23 01:03:48.096883 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:03:48.096894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-23 01:03:48.096905 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:03:48.096961 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-23 01:03:48.096976 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:03:48.096987 | orchestrator | 2025-11-23 01:03:48.096997 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-11-23 01:03:48.097008 | orchestrator | Sunday 23 November 2025 01:03:19 +0000 (0:00:00.560) 0:00:42.031 ******* 2025-11-23 01:03:48.097024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-23 01:03:48.097051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-23 01:03:48.097063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-23 01:03:48.097074 | orchestrator | 2025-11-23 01:03:48.097085 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-11-23 01:03:48.097095 | orchestrator | Sunday 23 November 2025 01:03:21 +0000 (0:00:01.383) 0:00:43.415 ******* 2025-11-23 01:03:48.097106 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:03:48.097117 | orchestrator | 2025-11-23 01:03:48.097127 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-11-23 01:03:48.097138 | orchestrator | Sunday 23 November 2025 01:03:23 +0000 (0:00:02.775) 0:00:46.190 ******* 2025-11-23 01:03:48.097149 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:03:48.097159 | orchestrator | 2025-11-23 01:03:48.097170 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-11-23 01:03:48.097181 | orchestrator | Sunday 23 November 2025 01:03:26 +0000 (0:00:02.418) 0:00:48.608 ******* 2025-11-23 01:03:48.097192 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:03:48.097211 | orchestrator | 2025-11-23 01:03:48.097229 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-11-23 01:03:48.097247 | orchestrator | Sunday 23 November 2025 01:03:40 +0000 (0:00:14.393) 0:01:03.002 ******* 2025-11-23 01:03:48.097275 | orchestrator | 2025-11-23 01:03:48.097321 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-11-23 01:03:48.097340 | orchestrator | Sunday 23 November 2025 01:03:40 +0000 (0:00:00.057) 0:01:03.060 ******* 2025-11-23 01:03:48.097358 | orchestrator | 2025-11-23 01:03:48.097386 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-11-23 01:03:48.097416 | orchestrator | Sunday 23 November 2025 01:03:40 +0000 (0:00:00.055) 0:01:03.116 ******* 2025-11-23 
01:03:48.097434 | orchestrator | 2025-11-23 01:03:48.097453 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-11-23 01:03:48.097470 | orchestrator | Sunday 23 November 2025 01:03:40 +0000 (0:00:00.059) 0:01:03.176 ******* 2025-11-23 01:03:48.097488 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:03:48.097505 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:03:48.097523 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:03:48.097541 | orchestrator | 2025-11-23 01:03:48.097560 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 01:03:48.097579 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-23 01:03:48.097599 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-23 01:03:48.097625 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-23 01:03:48.097643 | orchestrator | 2025-11-23 01:03:48.097661 | orchestrator | 2025-11-23 01:03:48.097678 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 01:03:48.097697 | orchestrator | Sunday 23 November 2025 01:03:46 +0000 (0:00:05.559) 0:01:08.735 ******* 2025-11-23 01:03:48.097716 | orchestrator | =============================================================================== 2025-11-23 01:03:48.097732 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.39s 2025-11-23 01:03:48.097750 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.50s 2025-11-23 01:03:48.097767 | orchestrator | placement : Restart placement-api container ----------------------------- 5.56s 2025-11-23 01:03:48.097784 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 
4.40s 2025-11-23 01:03:48.097802 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.93s 2025-11-23 01:03:48.097819 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.88s 2025-11-23 01:03:48.097836 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.84s 2025-11-23 01:03:48.097854 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.73s 2025-11-23 01:03:48.097872 | orchestrator | placement : Creating placement databases -------------------------------- 2.78s 2025-11-23 01:03:48.097890 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.72s 2025-11-23 01:03:48.097910 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.42s 2025-11-23 01:03:48.097929 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.66s 2025-11-23 01:03:48.097948 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.62s 2025-11-23 01:03:48.097965 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.55s 2025-11-23 01:03:48.097983 | orchestrator | placement : Check placement containers ---------------------------------- 1.38s 2025-11-23 01:03:48.098002 | orchestrator | placement : Copying over config.json files for services ----------------- 1.28s 2025-11-23 01:03:48.098086 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.96s 2025-11-23 01:03:48.098114 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.88s 2025-11-23 01:03:48.098133 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.75s 2025-11-23 01:03:48.098152 | orchestrator | placement : Copying over existing policy file --------------------------- 0.56s 
2025-11-23 01:03:48.098170 | orchestrator | 2025-11-23 01:03:48 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:03:48.098204 | orchestrator | 2025-11-23 01:03:48 | INFO  | Task e3bae0fd-a175-40ac-80fe-361f82357655 is in state SUCCESS 2025-11-23 01:03:48.098478 | orchestrator | 2025-11-23 01:03:48 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:03:48.098568 | orchestrator | 2025-11-23 01:03:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:03:48.098700 | orchestrator | 2025-11-23 01:03:48 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:03:48.099764 | orchestrator | 2025-11-23 01:03:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:03:51.139617 | orchestrator | 2025-11-23 01:03:51 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:03:51.141318 | orchestrator | 2025-11-23 01:03:51 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:03:51.143671 | orchestrator | 2025-11-23 01:03:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:03:51.144706 | orchestrator | 2025-11-23 01:03:51 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:03:51.144802 | orchestrator | 2025-11-23 01:03:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:03:54.174567 | orchestrator | 2025-11-23 01:03:54 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:03:54.175120 | orchestrator | 2025-11-23 01:03:54 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:03:54.176090 | orchestrator | 2025-11-23 01:03:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:03:54.176864 | orchestrator | 2025-11-23 01:03:54 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 
2025-11-23 01:03:54.176891 | orchestrator | 2025-11-23 01:03:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:03:57.218460 | orchestrator | 2025-11-23 01:03:57 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:03:57.219691 | orchestrator | 2025-11-23 01:03:57 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:03:57.221436 | orchestrator | 2025-11-23 01:03:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:03:57.222587 | orchestrator | 2025-11-23 01:03:57 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:03:57.222857 | orchestrator | 2025-11-23 01:03:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:00.251486 | orchestrator | 2025-11-23 01:04:00 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:00.251584 | orchestrator | 2025-11-23 01:04:00 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:00.252496 | orchestrator | 2025-11-23 01:04:00 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:00.252904 | orchestrator | 2025-11-23 01:04:00 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:00.252928 | orchestrator | 2025-11-23 01:04:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:03.287394 | orchestrator | 2025-11-23 01:04:03 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:03.288009 | orchestrator | 2025-11-23 01:04:03 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:03.289347 | orchestrator | 2025-11-23 01:04:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:03.290618 | orchestrator | 2025-11-23 01:04:03 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:03.290683 | 
orchestrator | 2025-11-23 01:04:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:06.330841 | orchestrator | 2025-11-23 01:04:06 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:06.332507 | orchestrator | 2025-11-23 01:04:06 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:06.335499 | orchestrator | 2025-11-23 01:04:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:06.338344 | orchestrator | 2025-11-23 01:04:06 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:06.338533 | orchestrator | 2025-11-23 01:04:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:09.379983 | orchestrator | 2025-11-23 01:04:09 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:09.380085 | orchestrator | 2025-11-23 01:04:09 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:09.381369 | orchestrator | 2025-11-23 01:04:09 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:09.382382 | orchestrator | 2025-11-23 01:04:09 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:09.382603 | orchestrator | 2025-11-23 01:04:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:12.413348 | orchestrator | 2025-11-23 01:04:12 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:12.413783 | orchestrator | 2025-11-23 01:04:12 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:12.415220 | orchestrator | 2025-11-23 01:04:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:12.416081 | orchestrator | 2025-11-23 01:04:12 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:12.416396 | orchestrator | 2025-11-23 
01:04:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:15.449439 | orchestrator | 2025-11-23 01:04:15 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:15.451582 | orchestrator | 2025-11-23 01:04:15 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:15.451824 | orchestrator | 2025-11-23 01:04:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:15.452748 | orchestrator | 2025-11-23 01:04:15 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:15.452780 | orchestrator | 2025-11-23 01:04:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:18.489156 | orchestrator | 2025-11-23 01:04:18 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:18.489244 | orchestrator | 2025-11-23 01:04:18 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:18.489587 | orchestrator | 2025-11-23 01:04:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:18.490649 | orchestrator | 2025-11-23 01:04:18 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:18.490666 | orchestrator | 2025-11-23 01:04:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:21.519397 | orchestrator | 2025-11-23 01:04:21 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:21.520972 | orchestrator | 2025-11-23 01:04:21 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:21.522067 | orchestrator | 2025-11-23 01:04:21 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:21.523210 | orchestrator | 2025-11-23 01:04:21 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:21.523478 | orchestrator | 2025-11-23 01:04:21 | INFO  | Wait 1 
second(s) until the next check 2025-11-23 01:04:24.552813 | orchestrator | 2025-11-23 01:04:24 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:24.552905 | orchestrator | 2025-11-23 01:04:24 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:24.553611 | orchestrator | 2025-11-23 01:04:24 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:24.554560 | orchestrator | 2025-11-23 01:04:24 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:24.554637 | orchestrator | 2025-11-23 01:04:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:27.593335 | orchestrator | 2025-11-23 01:04:27 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:27.593439 | orchestrator | 2025-11-23 01:04:27 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:27.594201 | orchestrator | 2025-11-23 01:04:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:27.594928 | orchestrator | 2025-11-23 01:04:27 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:27.594972 | orchestrator | 2025-11-23 01:04:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:30.637177 | orchestrator | 2025-11-23 01:04:30 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:30.638650 | orchestrator | 2025-11-23 01:04:30 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:30.640358 | orchestrator | 2025-11-23 01:04:30 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:30.642153 | orchestrator | 2025-11-23 01:04:30 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:30.642181 | orchestrator | 2025-11-23 01:04:30 | INFO  | Wait 1 second(s) until the next check 
2025-11-23 01:04:33.686206 | orchestrator | 2025-11-23 01:04:33 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:33.687044 | orchestrator | 2025-11-23 01:04:33 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:33.688567 | orchestrator | 2025-11-23 01:04:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:33.690711 | orchestrator | 2025-11-23 01:04:33 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:33.690741 | orchestrator | 2025-11-23 01:04:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:36.733145 | orchestrator | 2025-11-23 01:04:36 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:36.733865 | orchestrator | 2025-11-23 01:04:36 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:36.735247 | orchestrator | 2025-11-23 01:04:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:36.736536 | orchestrator | 2025-11-23 01:04:36 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:36.736674 | orchestrator | 2025-11-23 01:04:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:39.774883 | orchestrator | 2025-11-23 01:04:39 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:04:39.775480 | orchestrator | 2025-11-23 01:04:39 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:04:39.776308 | orchestrator | 2025-11-23 01:04:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:04:39.777368 | orchestrator | 2025-11-23 01:04:39 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED 2025-11-23 01:04:39.777401 | orchestrator | 2025-11-23 01:04:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:04:42.821263 | 
orchestrator | 2025-11-23 01:04:42 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:04:42.824303 | orchestrator | 2025-11-23 01:04:42 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:04:42.826076 | orchestrator | 2025-11-23 01:04:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:04:42.827553 | orchestrator | 2025-11-23 01:04:42 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:04:42.827591 | orchestrator | 2025-11-23 01:04:42 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:04:45.859973 | orchestrator | 2025-11-23 01:04:45 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:04:45.860190 | orchestrator | 2025-11-23 01:04:45 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:04:45.861083 | orchestrator | 2025-11-23 01:04:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:04:45.861747 | orchestrator | 2025-11-23 01:04:45 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:04:45.861774 | orchestrator | 2025-11-23 01:04:45 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:04:48.888209 | orchestrator | 2025-11-23 01:04:48 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:04:48.888539 | orchestrator | 2025-11-23 01:04:48 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:04:48.889052 | orchestrator | 2025-11-23 01:04:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:04:48.889584 | orchestrator | 2025-11-23 01:04:48 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:04:48.889609 | orchestrator | 2025-11-23 01:04:48 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:04:51.909598 | orchestrator | 2025-11-23 01:04:51 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:04:51.909843 | orchestrator | 2025-11-23 01:04:51 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:04:51.910463 | orchestrator | 2025-11-23 01:04:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:04:51.911133 | orchestrator | 2025-11-23 01:04:51 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:04:51.911157 | orchestrator | 2025-11-23 01:04:51 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:04:54.935911 | orchestrator | 2025-11-23 01:04:54 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:04:54.937073 | orchestrator | 2025-11-23 01:04:54 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:04:54.937675 | orchestrator | 2025-11-23 01:04:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:04:54.938317 | orchestrator | 2025-11-23 01:04:54 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:04:54.938378 | orchestrator | 2025-11-23 01:04:54 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:04:57.977668 | orchestrator | 2025-11-23 01:04:57 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:04:57.978866 | orchestrator | 2025-11-23 01:04:57 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:04:57.980945 | orchestrator | 2025-11-23 01:04:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:04:57.981986 | orchestrator | 2025-11-23 01:04:57 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:04:57.982247 | orchestrator | 2025-11-23 01:04:57 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:01.025018 | orchestrator | 2025-11-23 01:05:01 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:01.026666 | orchestrator | 2025-11-23 01:05:01 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:01.028948 | orchestrator | 2025-11-23 01:05:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:01.033418 | orchestrator | 2025-11-23 01:05:01 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state STARTED
2025-11-23 01:05:01.034661 | orchestrator | 2025-11-23 01:05:01 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:04.076622 | orchestrator | 2025-11-23 01:05:04 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:04.077936 | orchestrator | 2025-11-23 01:05:04 | INFO  | Task f28da9c0-0979-45cc-82ff-84d0f3363016 is in state STARTED
2025-11-23 01:05:04.080497 | orchestrator | 2025-11-23 01:05:04 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:04.084231 | orchestrator | 2025-11-23 01:05:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:04.088074 | orchestrator | 2025-11-23 01:05:04 | INFO  | Task 0510f44f-d18b-4e4e-83ac-8bf4323c4799 is in state SUCCESS
2025-11-23 01:05:04.088243 | orchestrator | 
2025-11-23 01:05:04.090705 | orchestrator | 
2025-11-23 01:05:04.090830 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 01:05:04.090850 | orchestrator | 
2025-11-23 01:05:04.090862 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 01:05:04.090873 | orchestrator | Sunday 23 November 2025 01:03:17 +0000 (0:00:00.396) 0:00:00.396 *******
2025-11-23 01:05:04.090885 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:05:04.090897 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:05:04.090908 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:05:04.090918 | orchestrator | 
2025-11-23 01:05:04.090929 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 01:05:04.090941 | orchestrator | Sunday 23 November 2025 01:03:18 +0000 (0:00:00.483) 0:00:00.879 *******
2025-11-23 01:05:04.090952 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-11-23 01:05:04.090963 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-11-23 01:05:04.090974 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-11-23 01:05:04.090985 | orchestrator | 
2025-11-23 01:05:04.090996 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-11-23 01:05:04.091007 | orchestrator | 
2025-11-23 01:05:04.091018 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-11-23 01:05:04.091028 | orchestrator | Sunday 23 November 2025 01:03:18 +0000 (0:00:00.467) 0:00:01.347 *******
2025-11-23 01:05:04.091043 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 01:05:04.091062 | orchestrator | 
2025-11-23 01:05:04.091082 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-11-23 01:05:04.091124 | orchestrator | Sunday 23 November 2025 01:03:19 +0000 (0:00:00.546) 0:00:01.893 *******
2025-11-23 01:05:04.091137 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-11-23 01:05:04.091147 | orchestrator | 
2025-11-23 01:05:04.091158 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-11-23 01:05:04.091169 | orchestrator | Sunday 23 November 2025 01:03:22 +0000 (0:00:03.751) 0:00:05.644 *******
2025-11-23 01:05:04.091180 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-11-23 01:05:04.091191 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-11-23 01:05:04.091202 | orchestrator | 
2025-11-23 01:05:04.091212 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-11-23 01:05:04.091223 | orchestrator | Sunday 23 November 2025 01:03:29 +0000 (0:00:06.620) 0:00:12.265 *******
2025-11-23 01:05:04.091234 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-11-23 01:05:04.091245 | orchestrator | 
2025-11-23 01:05:04.091256 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-11-23 01:05:04.091269 | orchestrator | Sunday 23 November 2025 01:03:33 +0000 (0:00:03.513) 0:00:15.779 *******
2025-11-23 01:05:04.091306 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-11-23 01:05:04.091319 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-11-23 01:05:04.091333 | orchestrator | 
2025-11-23 01:05:04.091347 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-11-23 01:05:04.091359 | orchestrator | Sunday 23 November 2025 01:03:37 +0000 (0:00:04.168) 0:00:19.947 *******
2025-11-23 01:05:04.091369 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-11-23 01:05:04.091380 | orchestrator | 
2025-11-23 01:05:04.091390 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-11-23 01:05:04.091401 | orchestrator | Sunday 23 November 2025 01:03:40 +0000 (0:00:03.528) 0:00:23.476 *******
2025-11-23 01:05:04.091412 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-11-23 01:05:04.091423 | orchestrator | 
2025-11-23 01:05:04.091434 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-11-23 01:05:04.091444 | orchestrator | Sunday 23 November 2025 01:03:44 +0000 (0:00:04.082) 0:00:27.559 *******
2025-11-23 01:05:04.091455 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:04.091466 | orchestrator | 
2025-11-23 01:05:04.091476 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-11-23 01:05:04.091487 | orchestrator | Sunday 23 November 2025 01:03:48 +0000 (0:00:03.439) 0:00:30.998 *******
2025-11-23 01:05:04.091498 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:04.091508 | orchestrator | 
2025-11-23 01:05:04.091519 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-11-23 01:05:04.091530 | orchestrator | Sunday 23 November 2025 01:03:52 +0000 (0:00:04.144) 0:00:35.143 *******
2025-11-23 01:05:04.091540 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:04.091551 | orchestrator | 
2025-11-23 01:05:04.091561 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-11-23 01:05:04.091572 | orchestrator | Sunday 23 November 2025 01:03:56 +0000 (0:00:03.768) 0:00:38.912 *******
2025-11-23 01:05:04.091613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-11-23 01:05:04.091638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.091650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.091662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.091679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.091697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.091715 | orchestrator | 2025-11-23 01:05:04.091726 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-11-23 01:05:04.091737 | orchestrator | Sunday 23 November 2025 01:03:57 +0000 (0:00:01.504) 0:00:40.416 ******* 2025-11-23 01:05:04.091748 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:04.091759 | orchestrator | 2025-11-23 01:05:04.091769 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-11-23 01:05:04.091780 | orchestrator | Sunday 23 November 2025 01:03:57 +0000 (0:00:00.121) 0:00:40.537 ******* 2025-11-23 01:05:04.091791 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:04.091801 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:04.091812 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:04.091823 | orchestrator | 2025-11-23 01:05:04.091833 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-11-23 01:05:04.091844 | orchestrator | Sunday 23 November 2025 01:03:58 +0000 (0:00:00.360) 0:00:40.898 ******* 2025-11-23 01:05:04.091855 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 01:05:04.091865 | orchestrator | 2025-11-23 01:05:04.091875 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-11-23 01:05:04.091886 | orchestrator | Sunday 23 November 2025 01:03:59 +0000 (0:00:00.957) 0:00:41.855 ******* 2025-11-23 01:05:04.091897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.091909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.091925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.091952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.091964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.091975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.091986 | orchestrator | 2025-11-23 01:05:04.091997 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-11-23 01:05:04.092008 | orchestrator | Sunday 23 November 2025 01:04:01 +0000 (0:00:02.579) 0:00:44.435 ******* 2025-11-23 01:05:04.092019 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:04.092030 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:05:04.092041 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:05:04.092051 | orchestrator | 2025-11-23 01:05:04.092062 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-23 01:05:04.092073 | orchestrator | Sunday 23 November 2025 01:04:02 +0000 (0:00:00.317) 0:00:44.752 ******* 2025-11-23 01:05:04.092084 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:05:04.092094 | orchestrator | 2025-11-23 01:05:04.092105 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-11-23 01:05:04.092116 | orchestrator | Sunday 23 November 2025 01:04:02 +0000 (0:00:00.606) 0:00:45.359 ******* 
2025-11-23 01:05:04.092127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.092154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.092166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.092178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.092189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.092200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.092218 | orchestrator | 2025-11-23 01:05:04.092229 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-11-23 01:05:04.092240 | orchestrator | Sunday 23 November 2025 01:04:04 +0000 (0:00:02.273) 0:00:47.632 ******* 2025-11-23 01:05:04.092262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 01:05:04.092298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.092311 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:04.092323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 01:05:04.092335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.092354 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:04.092370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 01:05:04.092388 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.092400 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:04.092411 | orchestrator | 2025-11-23 01:05:04.092422 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-11-23 01:05:04.092432 | orchestrator | Sunday 23 November 2025 01:04:05 +0000 (0:00:00.635) 0:00:48.267 ******* 2025-11-23 01:05:04.092443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2025-11-23 01:05:04.092455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.092466 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:04.092477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 01:05:04.092505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.092516 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:04.092534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 01:05:04.092546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.092557 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:04.092568 | orchestrator | 2025-11-23 01:05:04.092579 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-11-23 01:05:04.092590 | orchestrator | Sunday 23 November 2025 01:04:06 +0000 (0:00:00.875) 0:00:49.143 ******* 2025-11-23 01:05:04.092601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.092618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.092901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.092922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.092934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.092945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.092965 | orchestrator | 2025-11-23 01:05:04.092976 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2025-11-23 01:05:04.092987 | orchestrator | Sunday 23 November 2025 01:04:08 +0000 (0:00:02.296) 0:00:51.439 ******* 2025-11-23 01:05:04.092998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.093022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.093044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.093064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.093099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.093119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.093133 | orchestrator | 2025-11-23 01:05:04.093144 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-11-23 01:05:04.093155 | orchestrator | Sunday 23 November 2025 01:04:13 +0000 (0:00:04.515) 0:00:55.955 ******* 2025-11-23 01:05:04.093177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 01:05:04.093190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.093201 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:04.093212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 01:05:04.093231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.093242 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:04.093257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-23 01:05:04.093322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:04.093365 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:04.093377 | orchestrator | 2025-11-23 01:05:04.093389 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-11-23 01:05:04.093399 | orchestrator | Sunday 23 November 2025 01:04:13 +0000 (0:00:00.548) 0:00:56.503 ******* 2025-11-23 01:05:04.093411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.093430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.093441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-23 01:05:04.093457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.093477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:04.093491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:04.093511 | orchestrator |
2025-11-23 01:05:04.093524 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-11-23 01:05:04.093537 | orchestrator | Sunday 23 November 2025 01:04:16 +0000 (0:00:02.217) 0:00:58.720 *******
2025-11-23 01:05:04.093551 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:04.093564 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:04.093577 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:04.093590 | orchestrator |
2025-11-23 01:05:04.093604 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-11-23 01:05:04.093616 | orchestrator | Sunday 23 November 2025 01:04:16 +0000 (0:00:00.274) 0:00:58.995 *******
2025-11-23 01:05:04.093628 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:04.093641 | orchestrator |
2025-11-23 01:05:04.093654 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-11-23 01:05:04.093667 | orchestrator | Sunday 23 November 2025 01:04:18 +0000 (0:00:02.180) 0:01:01.176 *******
2025-11-23 01:05:04.093680 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:04.093743 | orchestrator |
2025-11-23 01:05:04.093758 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-11-23 01:05:04.093771 | orchestrator | Sunday 23 November 2025 01:04:20 +0000 (0:00:02.312) 0:01:03.489 *******
2025-11-23 01:05:04.093784 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:04.093797 | orchestrator |
2025-11-23 01:05:04.093811 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-11-23 01:05:04.093822 | orchestrator | Sunday 23 November 2025 01:04:37 +0000 (0:00:16.251) 0:01:19.741 *******
2025-11-23 01:05:04.093833 | orchestrator |
2025-11-23 01:05:04.093843 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-11-23 01:05:04.093854 | orchestrator | Sunday 23 November 2025 01:04:37 +0000 (0:00:00.057) 0:01:19.798 *******
2025-11-23 01:05:04.093865 | orchestrator |
2025-11-23 01:05:04.093875 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-11-23 01:05:04.093886 | orchestrator | Sunday 23 November 2025 01:04:37 +0000 (0:00:00.059) 0:01:19.858 *******
2025-11-23 01:05:04.093897 | orchestrator |
2025-11-23 01:05:04.093907 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-11-23 01:05:04.093918 | orchestrator | Sunday 23 November 2025 01:04:37 +0000 (0:00:00.061) 0:01:19.920 *******
2025-11-23 01:05:04.093928 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:04.093939 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:05:04.093950 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:05:04.093961 | orchestrator |
2025-11-23 01:05:04.093971 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-11-23 01:05:04.093982 | orchestrator | Sunday 23 November 2025 01:04:51 +0000 (0:00:14.085) 0:01:34.005 *******
2025-11-23 01:05:04.093993 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:04.094003 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:05:04.094014 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:05:04.094075 | orchestrator |
2025-11-23 01:05:04.094087 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 01:05:04.094098 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-23 01:05:04.094116 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-23 01:05:04.094127 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-23 01:05:04.094146 | orchestrator |
2025-11-23 01:05:04.094157 | orchestrator |
2025-11-23 01:05:04.094168 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 01:05:04.094179 | orchestrator | Sunday 23 November 2025 01:05:02 +0000 (0:00:11.476) 0:01:45.481 *******
2025-11-23 01:05:04.094189 | orchestrator | ===============================================================================
2025-11-23 01:05:04.094200 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.25s
2025-11-23 01:05:04.094220 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.09s
2025-11-23 01:05:04.094232 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.48s
2025-11-23 01:05:04.094243 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.62s
2025-11-23 01:05:04.094254 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.52s
2025-11-23 01:05:04.094264 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.17s
2025-11-23 01:05:04.094298 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.14s
2025-11-23 01:05:04.094309 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.08s
2025-11-23 01:05:04.094319 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.77s
2025-11-23 01:05:04.094329 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.75s
2025-11-23 01:05:04.094338 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.53s
2025-11-23 01:05:04.094348 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.51s
2025-11-23 01:05:04.094358 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.44s
2025-11-23 01:05:04.094367 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.58s
2025-11-23 01:05:04.094377 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.31s
2025-11-23 01:05:04.094387 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.30s
2025-11-23 01:05:04.094396 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.27s
2025-11-23 01:05:04.094406 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.22s
2025-11-23 01:05:04.094415 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.18s
2025-11-23 01:05:04.094425 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.50s
2025-11-23 01:05:04.094435 | orchestrator | 2025-11-23 01:05:04 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:07.132954 | orchestrator | 2025-11-23 01:05:07 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:07.134422 | orchestrator | 2025-11-23 01:05:07 | INFO  | Task f28da9c0-0979-45cc-82ff-84d0f3363016 is in state STARTED
2025-11-23 01:05:07.137346 | orchestrator | 2025-11-23 01:05:07 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:07.140104 | orchestrator | 2025-11-23 01:05:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:07.140685 | orchestrator | 2025-11-23 01:05:07 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:10.188482 | orchestrator | 2025-11-23 01:05:10 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:10.190463 | orchestrator | 2025-11-23 01:05:10 | INFO  | Task f28da9c0-0979-45cc-82ff-84d0f3363016 is in state SUCCESS
2025-11-23 01:05:10.192960 | orchestrator | 2025-11-23 01:05:10 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:10.195107 | orchestrator | 2025-11-23 01:05:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:10.197200 | orchestrator | 2025-11-23 01:05:10 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:10.197588 | orchestrator | 2025-11-23 01:05:10 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:13.234439 | orchestrator | 2025-11-23 01:05:13 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:13.235894 | orchestrator | 2025-11-23 01:05:13 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:13.237587 | orchestrator | 2025-11-23 01:05:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:13.239232 | orchestrator | 2025-11-23 01:05:13 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:13.239587 | orchestrator | 2025-11-23 01:05:13 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:16.282105 | orchestrator | 2025-11-23 01:05:16 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:16.284118 | orchestrator | 2025-11-23 01:05:16 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:16.286495 | orchestrator | 2025-11-23 01:05:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:16.288556 | orchestrator | 2025-11-23 01:05:16 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:16.288575 | orchestrator | 2025-11-23 01:05:16 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:19.337909 | orchestrator | 2025-11-23 01:05:19 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:19.339788 | orchestrator | 2025-11-23 01:05:19 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:19.342117 | orchestrator | 2025-11-23 01:05:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:19.344423 | orchestrator | 2025-11-23 01:05:19 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:19.344446 | orchestrator | 2025-11-23 01:05:19 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:22.388014 | orchestrator | 2025-11-23 01:05:22 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:22.389094 | orchestrator | 2025-11-23 01:05:22 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:22.390749 | orchestrator | 2025-11-23 01:05:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:22.392096 | orchestrator | 2025-11-23 01:05:22 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:22.392134 | orchestrator | 2025-11-23 01:05:22 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:25.437533 | orchestrator | 2025-11-23 01:05:25 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED
2025-11-23 01:05:25.439935 | orchestrator | 2025-11-23 01:05:25 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:25.443200 | orchestrator | 2025-11-23 01:05:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:25.444481 | orchestrator | 2025-11-23 01:05:25 | INFO  |
Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:05:25.444516 | orchestrator | 2025-11-23 01:05:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:05:28.485988 | orchestrator | 2025-11-23 01:05:28 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:05:28.488362 | orchestrator | 2025-11-23 01:05:28 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:05:28.490160 | orchestrator | 2025-11-23 01:05:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:05:28.491569 | orchestrator | 2025-11-23 01:05:28 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:05:28.492362 | orchestrator | 2025-11-23 01:05:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:05:31.558779 | orchestrator | 2025-11-23 01:05:31 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state STARTED 2025-11-23 01:05:31.562163 | orchestrator | 2025-11-23 01:05:31 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED 2025-11-23 01:05:31.564461 | orchestrator | 2025-11-23 01:05:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:05:31.567134 | orchestrator | 2025-11-23 01:05:31 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:05:31.567224 | orchestrator | 2025-11-23 01:05:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:05:34.621615 | orchestrator | 2025-11-23 01:05:34 | INFO  | Task f8fa334c-4c0c-499f-ac20-976431505e7a is in state SUCCESS 2025-11-23 01:05:34.622677 | orchestrator | 2025-11-23 01:05:34.622755 | orchestrator | 2025-11-23 01:05:34.622771 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 01:05:34.622783 | orchestrator | 2025-11-23 01:05:34.622795 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-11-23 01:05:34.622806 | orchestrator | Sunday 23 November 2025 01:05:06 +0000 (0:00:00.153) 0:00:00.153 ******* 2025-11-23 01:05:34.622817 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:34.622830 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:05:34.622841 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:05:34.622852 | orchestrator | 2025-11-23 01:05:34.622863 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 01:05:34.622874 | orchestrator | Sunday 23 November 2025 01:05:06 +0000 (0:00:00.255) 0:00:00.409 ******* 2025-11-23 01:05:34.622900 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-11-23 01:05:34.622912 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-11-23 01:05:34.622923 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-11-23 01:05:34.622934 | orchestrator | 2025-11-23 01:05:34.622945 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-11-23 01:05:34.622955 | orchestrator | 2025-11-23 01:05:34.622966 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-11-23 01:05:34.622977 | orchestrator | Sunday 23 November 2025 01:05:07 +0000 (0:00:00.631) 0:00:01.040 ******* 2025-11-23 01:05:34.622987 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:05:34.622998 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:05:34.623130 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:34.623142 | orchestrator | 2025-11-23 01:05:34.623153 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 01:05:34.623165 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 01:05:34.623179 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-11-23 01:05:34.623190 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-23 01:05:34.623308 | orchestrator | 2025-11-23 01:05:34.623335 | orchestrator | 2025-11-23 01:05:34.623356 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 01:05:34.623377 | orchestrator | Sunday 23 November 2025 01:05:08 +0000 (0:00:00.630) 0:00:01.671 ******* 2025-11-23 01:05:34.623393 | orchestrator | =============================================================================== 2025-11-23 01:05:34.623430 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.63s 2025-11-23 01:05:34.623443 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-11-23 01:05:34.623456 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-11-23 01:05:34.623468 | orchestrator | 2025-11-23 01:05:34.623480 | orchestrator | 2025-11-23 01:05:34.623493 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 01:05:34.623505 | orchestrator | 2025-11-23 01:05:34.623516 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-11-23 01:05:34.623527 | orchestrator | Sunday 23 November 2025 00:56:34 +0000 (0:00:00.248) 0:00:00.249 ******* 2025-11-23 01:05:34.623537 | orchestrator | changed: [testbed-manager] 2025-11-23 01:05:34.623549 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.623559 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:05:34.623570 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:05:34.623580 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.623590 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:05:34.623601 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:05:34.623665 | orchestrator | 
2025-11-23 01:05:34.623677 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 01:05:34.623688 | orchestrator | Sunday 23 November 2025 00:56:34 +0000 (0:00:00.796) 0:00:01.045 ******* 2025-11-23 01:05:34.623699 | orchestrator | changed: [testbed-manager] 2025-11-23 01:05:34.623709 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.623720 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:05:34.623730 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:05:34.623751 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.623763 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:05:34.623773 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:05:34.623805 | orchestrator | 2025-11-23 01:05:34.623817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 01:05:34.623827 | orchestrator | Sunday 23 November 2025 00:56:35 +0000 (0:00:00.612) 0:00:01.658 ******* 2025-11-23 01:05:34.623838 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-11-23 01:05:34.623849 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-11-23 01:05:34.623860 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-11-23 01:05:34.623871 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-11-23 01:05:34.623882 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-11-23 01:05:34.623893 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-11-23 01:05:34.623916 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-11-23 01:05:34.623927 | orchestrator | 2025-11-23 01:05:34.623937 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-11-23 01:05:34.623948 | orchestrator | 2025-11-23 01:05:34.623959 | orchestrator | TASK [Bootstrap deploy] 
******************************************************** 2025-11-23 01:05:34.624006 | orchestrator | Sunday 23 November 2025 00:56:36 +0000 (0:00:00.754) 0:00:02.413 ******* 2025-11-23 01:05:34.624017 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:05:34.624028 | orchestrator | 2025-11-23 01:05:34.624038 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-11-23 01:05:34.624049 | orchestrator | Sunday 23 November 2025 00:56:36 +0000 (0:00:00.599) 0:00:03.012 ******* 2025-11-23 01:05:34.624060 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-11-23 01:05:34.624089 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-11-23 01:05:34.624101 | orchestrator | 2025-11-23 01:05:34.624111 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-11-23 01:05:34.624122 | orchestrator | Sunday 23 November 2025 00:56:41 +0000 (0:00:04.590) 0:00:07.602 ******* 2025-11-23 01:05:34.624133 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-23 01:05:34.624153 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-23 01:05:34.624164 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.624175 | orchestrator | 2025-11-23 01:05:34.624185 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-11-23 01:05:34.624196 | orchestrator | Sunday 23 November 2025 00:56:45 +0000 (0:00:04.279) 0:00:11.882 ******* 2025-11-23 01:05:34.624214 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.624225 | orchestrator | 2025-11-23 01:05:34.624236 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-11-23 01:05:34.624247 | orchestrator | Sunday 23 November 2025 00:56:46 +0000 (0:00:00.810) 0:00:12.692 ******* 2025-11-23 01:05:34.624258 | orchestrator | changed: [testbed-node-0] 2025-11-23 
01:05:34.624371 | orchestrator | 2025-11-23 01:05:34.624383 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-11-23 01:05:34.624394 | orchestrator | Sunday 23 November 2025 00:56:47 +0000 (0:00:01.375) 0:00:14.067 ******* 2025-11-23 01:05:34.624404 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.624415 | orchestrator | 2025-11-23 01:05:34.624426 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-23 01:05:34.624437 | orchestrator | Sunday 23 November 2025 00:56:50 +0000 (0:00:02.633) 0:00:16.701 ******* 2025-11-23 01:05:34.624447 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.624458 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.624469 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.624480 | orchestrator | 2025-11-23 01:05:34.624490 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-11-23 01:05:34.624501 | orchestrator | Sunday 23 November 2025 00:56:50 +0000 (0:00:00.289) 0:00:16.990 ******* 2025-11-23 01:05:34.624537 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:34.624549 | orchestrator | 2025-11-23 01:05:34.624559 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-11-23 01:05:34.624570 | orchestrator | Sunday 23 November 2025 00:57:23 +0000 (0:00:32.761) 0:00:49.753 ******* 2025-11-23 01:05:34.624581 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.624592 | orchestrator | 2025-11-23 01:05:34.624602 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-23 01:05:34.624613 | orchestrator | Sunday 23 November 2025 00:57:39 +0000 (0:00:15.732) 0:01:05.485 ******* 2025-11-23 01:05:34.624634 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:34.624645 | orchestrator | 2025-11-23 01:05:34.624655 | orchestrator | TASK 
[nova-cell : Extract current cell settings from list] ********************* 2025-11-23 01:05:34.624666 | orchestrator | Sunday 23 November 2025 00:57:53 +0000 (0:00:14.079) 0:01:19.565 ******* 2025-11-23 01:05:34.624677 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:34.624687 | orchestrator | 2025-11-23 01:05:34.624698 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-11-23 01:05:34.624708 | orchestrator | Sunday 23 November 2025 00:57:54 +0000 (0:00:00.816) 0:01:20.381 ******* 2025-11-23 01:05:34.624719 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.624748 | orchestrator | 2025-11-23 01:05:34.624759 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-23 01:05:34.624803 | orchestrator | Sunday 23 November 2025 00:57:54 +0000 (0:00:00.399) 0:01:20.781 ******* 2025-11-23 01:05:34.624816 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:05:34.624858 | orchestrator | 2025-11-23 01:05:34.624869 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-11-23 01:05:34.624879 | orchestrator | Sunday 23 November 2025 00:57:55 +0000 (0:00:00.429) 0:01:21.210 ******* 2025-11-23 01:05:34.624890 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:34.624900 | orchestrator | 2025-11-23 01:05:34.624911 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-11-23 01:05:34.624922 | orchestrator | Sunday 23 November 2025 00:58:14 +0000 (0:00:19.441) 0:01:40.651 ******* 2025-11-23 01:05:34.624942 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.624976 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.624987 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625009 | orchestrator | 2025-11-23 01:05:34.625021 | orchestrator | PLAY 
[Bootstrap nova cell databases] ******************************************* 2025-11-23 01:05:34.625031 | orchestrator | 2025-11-23 01:05:34.625042 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-11-23 01:05:34.625053 | orchestrator | Sunday 23 November 2025 00:58:15 +0000 (0:00:00.474) 0:01:41.126 ******* 2025-11-23 01:05:34.625112 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:05:34.625123 | orchestrator | 2025-11-23 01:05:34.625134 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-11-23 01:05:34.625144 | orchestrator | Sunday 23 November 2025 00:58:15 +0000 (0:00:00.735) 0:01:41.861 ******* 2025-11-23 01:05:34.625155 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625166 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625177 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.625187 | orchestrator | 2025-11-23 01:05:34.625198 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-11-23 01:05:34.625209 | orchestrator | Sunday 23 November 2025 00:58:18 +0000 (0:00:02.265) 0:01:44.126 ******* 2025-11-23 01:05:34.625219 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625230 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625241 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.625252 | orchestrator | 2025-11-23 01:05:34.625413 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-11-23 01:05:34.625448 | orchestrator | Sunday 23 November 2025 00:58:20 +0000 (0:00:02.245) 0:01:46.372 ******* 2025-11-23 01:05:34.625459 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.625470 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625495 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625506 
| orchestrator | 2025-11-23 01:05:34.625542 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-11-23 01:05:34.625554 | orchestrator | Sunday 23 November 2025 00:58:20 +0000 (0:00:00.332) 0:01:46.704 ******* 2025-11-23 01:05:34.625565 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-23 01:05:34.625586 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625596 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-23 01:05:34.625605 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625629 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-11-23 01:05:34.625639 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-11-23 01:05:34.625648 | orchestrator | 2025-11-23 01:05:34.625664 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-11-23 01:05:34.625674 | orchestrator | Sunday 23 November 2025 00:58:28 +0000 (0:00:08.259) 0:01:54.963 ******* 2025-11-23 01:05:34.625684 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.625693 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625702 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625712 | orchestrator | 2025-11-23 01:05:34.625721 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-11-23 01:05:34.625731 | orchestrator | Sunday 23 November 2025 00:58:29 +0000 (0:00:00.742) 0:01:55.706 ******* 2025-11-23 01:05:34.625740 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-23 01:05:34.625749 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.625759 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-23 01:05:34.625768 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625778 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-23 01:05:34.625787 | orchestrator | 
skipping: [testbed-node-2] 2025-11-23 01:05:34.625796 | orchestrator | 2025-11-23 01:05:34.625806 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-11-23 01:05:34.625816 | orchestrator | Sunday 23 November 2025 00:58:31 +0000 (0:00:01.526) 0:01:57.232 ******* 2025-11-23 01:05:34.625835 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625845 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625854 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.625863 | orchestrator | 2025-11-23 01:05:34.625873 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-11-23 01:05:34.625882 | orchestrator | Sunday 23 November 2025 00:58:31 +0000 (0:00:00.617) 0:01:57.849 ******* 2025-11-23 01:05:34.625892 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625901 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625910 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.625920 | orchestrator | 2025-11-23 01:05:34.625929 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-11-23 01:05:34.625938 | orchestrator | Sunday 23 November 2025 00:58:32 +0000 (0:00:01.052) 0:01:58.902 ******* 2025-11-23 01:05:34.625957 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.625967 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.625977 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.625986 | orchestrator | 2025-11-23 01:05:34.625995 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-11-23 01:05:34.626005 | orchestrator | Sunday 23 November 2025 00:58:34 +0000 (0:00:01.953) 0:02:00.855 ******* 2025-11-23 01:05:34.626060 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.626073 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.626083 | orchestrator | ok: 
[testbed-node-0] 2025-11-23 01:05:34.626093 | orchestrator | 2025-11-23 01:05:34.626103 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-23 01:05:34.626112 | orchestrator | Sunday 23 November 2025 00:58:56 +0000 (0:00:21.869) 0:02:22.725 ******* 2025-11-23 01:05:34.626121 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.626131 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.626140 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:34.626150 | orchestrator | 2025-11-23 01:05:34.626159 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-11-23 01:05:34.626168 | orchestrator | Sunday 23 November 2025 00:59:11 +0000 (0:00:14.731) 0:02:37.456 ******* 2025-11-23 01:05:34.626178 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:05:34.626187 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.626197 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.626206 | orchestrator | 2025-11-23 01:05:34.626215 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-11-23 01:05:34.626225 | orchestrator | Sunday 23 November 2025 00:59:12 +0000 (0:00:00.895) 0:02:38.351 ******* 2025-11-23 01:05:34.626235 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.626244 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.626253 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.626285 | orchestrator | 2025-11-23 01:05:34.626295 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-11-23 01:05:34.626305 | orchestrator | Sunday 23 November 2025 00:59:25 +0000 (0:00:12.744) 0:02:51.095 ******* 2025-11-23 01:05:34.626315 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.626324 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.626334 | orchestrator | skipping: [testbed-node-2] 
2025-11-23 01:05:34.626343 | orchestrator | 2025-11-23 01:05:34.626353 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-11-23 01:05:34.626362 | orchestrator | Sunday 23 November 2025 00:59:26 +0000 (0:00:01.071) 0:02:52.167 ******* 2025-11-23 01:05:34.626371 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.626381 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.626391 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.626400 | orchestrator | 2025-11-23 01:05:34.626409 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-11-23 01:05:34.626419 | orchestrator | 2025-11-23 01:05:34.626428 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-23 01:05:34.626449 | orchestrator | Sunday 23 November 2025 00:59:27 +0000 (0:00:00.907) 0:02:53.075 ******* 2025-11-23 01:05:34.626459 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:05:34.626469 | orchestrator | 2025-11-23 01:05:34.626488 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-11-23 01:05:34.626498 | orchestrator | Sunday 23 November 2025 00:59:27 +0000 (0:00:00.684) 0:02:53.760 ******* 2025-11-23 01:05:34.626507 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-11-23 01:05:34.626517 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-11-23 01:05:34.626526 | orchestrator | 2025-11-23 01:05:34.626536 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-11-23 01:05:34.626545 | orchestrator | Sunday 23 November 2025 00:59:30 +0000 (0:00:02.965) 0:02:56.726 ******* 2025-11-23 01:05:34.626555 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> 
https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-11-23 01:05:34.626570 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-11-23 01:05:34.626580 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-11-23 01:05:34.626590 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-11-23 01:05:34.626600 | orchestrator | 2025-11-23 01:05:34.626609 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-11-23 01:05:34.626619 | orchestrator | Sunday 23 November 2025 00:59:36 +0000 (0:00:05.864) 0:03:02.590 ******* 2025-11-23 01:05:34.626628 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-23 01:05:34.626637 | orchestrator | 2025-11-23 01:05:34.626647 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-11-23 01:05:34.626656 | orchestrator | Sunday 23 November 2025 00:59:39 +0000 (0:00:03.374) 0:03:05.964 ******* 2025-11-23 01:05:34.626666 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-23 01:05:34.626675 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-11-23 01:05:34.626685 | orchestrator | 2025-11-23 01:05:34.626694 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-11-23 01:05:34.626703 | orchestrator | Sunday 23 November 2025 00:59:43 +0000 (0:00:04.014) 0:03:09.979 ******* 2025-11-23 01:05:34.626713 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-23 01:05:34.626722 | orchestrator | 2025-11-23 01:05:34.626732 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-11-23 01:05:34.626741 | orchestrator | Sunday 23 November 2025 00:59:47 +0000 (0:00:03.677) 
0:03:13.656 ******* 2025-11-23 01:05:34.626750 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-11-23 01:05:34.626760 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-11-23 01:05:34.626769 | orchestrator | 2025-11-23 01:05:34.626779 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-11-23 01:05:34.626788 | orchestrator | Sunday 23 November 2025 00:59:55 +0000 (0:00:07.524) 0:03:21.181 ******* 2025-11-23 01:05:34.626803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.626834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.626851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.626863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.626874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.626892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.626902 | orchestrator |
2025-11-23 01:05:34.626912 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-11-23 01:05:34.626922 | orchestrator | Sunday 23 November 2025 00:59:57 +0000 (0:00:01.993) 0:03:23.174 *******
2025-11-23 01:05:34.626932 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.626941 | orchestrator |
2025-11-23 01:05:34.626951 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-11-23 01:05:34.626960 | orchestrator | Sunday 23 November 2025 00:59:57 +0000 (0:00:00.100) 0:03:23.274 *******
2025-11-23 01:05:34.626970 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.626979 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.626989 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.626998 | orchestrator |
2025-11-23 01:05:34.627008 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-11-23 01:05:34.627018 | orchestrator | Sunday 23 November 2025 00:59:57 +0000 (0:00:00.241) 0:03:23.516 *******
2025-11-23 01:05:34.627032 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-23 01:05:34.627041 | orchestrator |
2025-11-23 01:05:34.627051 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-11-23 01:05:34.627060 | orchestrator | Sunday 23 November 2025 00:59:58 +0000 (0:00:01.514) 0:03:25.030 *******
2025-11-23 01:05:34.627070 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.627079 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.627089 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.627098 | orchestrator |
2025-11-23 01:05:34.627108 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-11-23 01:05:34.627117 | orchestrator | Sunday 23 November 2025 00:59:59 +0000 (0:00:00.642) 0:03:25.672 *******
2025-11-23 01:05:34.627132 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 01:05:34.627142 | orchestrator |
2025-11-23 01:05:34.627151 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-11-23 01:05:34.627161 | orchestrator | Sunday 23 November 2025 01:00:00 +0000 (0:00:00.727) 0:03:26.400 *******
2025-11-23 01:05:34.627172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-11-23 01:05:34.627190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value':
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.627209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.627231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.627242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.627253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.627299 | orchestrator |
2025-11-23 01:05:34.627310 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-11-23 01:05:34.627355 | orchestrator | Sunday 23 November 2025 01:00:02 +0000 (0:00:02.574) 0:03:28.975 *******
2025-11-23 01:05:34.627366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-11-23 01:05:34.627377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group':
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.627388 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.627422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 01:05:34.627433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.627451 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.627462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 01:05:34.627472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.627482 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.627492 | orchestrator |
2025-11-23 01:05:34.627502 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-11-23 01:05:34.627511 | orchestrator | Sunday 23 November 2025 01:00:04 +0000 (0:00:01.198) 0:03:30.173 *******
2025-11-23 01:05:34.627534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775',
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 01:05:34.627546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.627563 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.627578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2025-11-23 01:05:34.627617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.627636 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.627670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 01:05:34.627683 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.627703 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.627712 | orchestrator |
2025-11-23 01:05:34.627722 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-11-23 01:05:34.627731 | orchestrator | Sunday 23 November 2025 01:00:04 +0000 (0:00:00.873) 0:03:31.046 *******
2025-11-23 01:05:34.627742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend':
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.627753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.627775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.627793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.627804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.627814 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.627824 | orchestrator |
2025-11-23 01:05:34.627834 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-11-23 01:05:34.627843 | orchestrator | Sunday 23 November 2025 01:00:07 +0000 (0:00:02.607) 0:03:33.654 *******
2025-11-23 01:05:34.627859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.627875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.627892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.627902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.627912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.627929 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.627939 | orchestrator |
2025-11-23 01:05:34.627949 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2025-11-23 01:05:34.627958 | orchestrator | Sunday 23 November 2025 01:00:15 +0000 (0:00:07.793) 0:03:41.448 *******
2025-11-23 01:05:34.627972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 01:05:34.627989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.627999 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.628009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-23 01:05:34.628020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.628031 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.628055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2025-11-23 01:05:34.628078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.628090 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.628100 | orchestrator | 2025-11-23 01:05:34.628111 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-11-23 01:05:34.628122 | orchestrator | Sunday 23 November 2025 01:00:16 +0000 (0:00:00.747) 0:03:42.196 ******* 2025-11-23 01:05:34.628133 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.628144 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:05:34.628154 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:05:34.628165 | orchestrator | 2025-11-23 01:05:34.628175 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-11-23 01:05:34.628186 | orchestrator | Sunday 23 November 2025 01:00:18 +0000 (0:00:02.200) 0:03:44.396 ******* 2025-11-23 01:05:34.628197 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.628207 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.628218 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.628228 | orchestrator | 2025-11-23 01:05:34.628239 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-11-23 01:05:34.628249 | orchestrator | Sunday 23 November 2025 01:00:18 +0000 (0:00:00.524) 0:03:44.920 ******* 2025-11-23 01:05:34.628279 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.628306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.628325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-23 01:05:34.628338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.628349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.628361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.628379 | orchestrator | 2025-11-23 01:05:34.628390 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-23 01:05:34.628400 | orchestrator | Sunday 23 November 2025 01:00:21 +0000 (0:00:02.512) 0:03:47.433 ******* 2025-11-23 01:05:34.628411 | orchestrator | 2025-11-23 01:05:34.628422 
| orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-23 01:05:34.628438 | orchestrator | Sunday 23 November 2025 01:00:21 +0000 (0:00:00.129) 0:03:47.562 ******* 2025-11-23 01:05:34.628449 | orchestrator | 2025-11-23 01:05:34.628460 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-23 01:05:34.628471 | orchestrator | Sunday 23 November 2025 01:00:21 +0000 (0:00:00.117) 0:03:47.680 ******* 2025-11-23 01:05:34.628481 | orchestrator | 2025-11-23 01:05:34.628492 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-11-23 01:05:34.628502 | orchestrator | Sunday 23 November 2025 01:00:21 +0000 (0:00:00.132) 0:03:47.812 ******* 2025-11-23 01:05:34.628513 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.628524 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:05:34.628534 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:05:34.628545 | orchestrator | 2025-11-23 01:05:34.628555 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-11-23 01:05:34.628571 | orchestrator | Sunday 23 November 2025 01:00:40 +0000 (0:00:18.485) 0:04:06.298 ******* 2025-11-23 01:05:34.628582 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.628592 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:05:34.628603 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:05:34.628613 | orchestrator | 2025-11-23 01:05:34.628624 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-11-23 01:05:34.628635 | orchestrator | 2025-11-23 01:05:34.628645 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-23 01:05:34.628656 | orchestrator | Sunday 23 November 2025 01:00:53 +0000 (0:00:13.215) 0:04:19.514 ******* 2025-11-23 01:05:34.628667 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:05:34.628678 | orchestrator | 2025-11-23 01:05:34.628689 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-23 01:05:34.628699 | orchestrator | Sunday 23 November 2025 01:00:54 +0000 (0:00:01.233) 0:04:20.747 ******* 2025-11-23 01:05:34.628710 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:05:34.628720 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:05:34.628731 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:05:34.628741 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.628752 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.628762 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.628773 | orchestrator | 2025-11-23 01:05:34.628783 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-11-23 01:05:34.628794 | orchestrator | Sunday 23 November 2025 01:00:55 +0000 (0:00:00.526) 0:04:21.273 ******* 2025-11-23 01:05:34.628805 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.628815 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.628826 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.628837 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 01:05:34.628847 | orchestrator | 2025-11-23 01:05:34.628858 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-23 01:05:34.628883 | orchestrator | Sunday 23 November 2025 01:00:56 +0000 (0:00:00.854) 0:04:22.127 ******* 2025-11-23 01:05:34.628894 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-11-23 01:05:34.628905 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-11-23 01:05:34.628915 | orchestrator | ok: 
[testbed-node-4] => (item=br_netfilter) 2025-11-23 01:05:34.628934 | orchestrator | 2025-11-23 01:05:34.628945 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-23 01:05:34.628956 | orchestrator | Sunday 23 November 2025 01:00:56 +0000 (0:00:00.712) 0:04:22.840 ******* 2025-11-23 01:05:34.628967 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-11-23 01:05:34.628977 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-11-23 01:05:34.628988 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-11-23 01:05:34.628998 | orchestrator | 2025-11-23 01:05:34.629009 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-23 01:05:34.629020 | orchestrator | Sunday 23 November 2025 01:00:58 +0000 (0:00:01.300) 0:04:24.141 ******* 2025-11-23 01:05:34.629030 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-11-23 01:05:34.629041 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:05:34.629051 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-11-23 01:05:34.629062 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:05:34.629072 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-11-23 01:05:34.629083 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:05:34.629094 | orchestrator | 2025-11-23 01:05:34.629104 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-11-23 01:05:34.629115 | orchestrator | Sunday 23 November 2025 01:00:58 +0000 (0:00:00.498) 0:04:24.639 ******* 2025-11-23 01:05:34.629126 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 01:05:34.629137 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 01:05:34.629147 | orchestrator | skipping: [testbed-node-0] 2025-11-23 
01:05:34.629158 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 01:05:34.629178 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 01:05:34.629190 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.629200 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-23 01:05:34.629211 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-23 01:05:34.629221 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.629232 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-23 01:05:34.629243 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-23 01:05:34.629253 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-23 01:05:34.629295 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-23 01:05:34.629307 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-23 01:05:34.629317 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-23 01:05:34.629328 | orchestrator | 2025-11-23 01:05:34.629339 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-11-23 01:05:34.629350 | orchestrator | Sunday 23 November 2025 01:01:00 +0000 (0:00:02.173) 0:04:26.813 ******* 2025-11-23 01:05:34.629360 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.629371 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.629381 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.629392 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.629403 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:05:34.629418 | orchestrator | changed: 
[testbed-node-5] 2025-11-23 01:05:34.629430 | orchestrator | 2025-11-23 01:05:34.629440 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-11-23 01:05:34.629451 | orchestrator | Sunday 23 November 2025 01:01:01 +0000 (0:00:01.177) 0:04:27.990 ******* 2025-11-23 01:05:34.629462 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.629472 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.629492 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.629502 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.629513 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:05:34.629523 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:05:34.629534 | orchestrator | 2025-11-23 01:05:34.629545 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-11-23 01:05:34.629556 | orchestrator | Sunday 23 November 2025 01:01:03 +0000 (0:00:01.588) 0:04:29.578 ******* 2025-11-23 01:05:34.629568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629581 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629780 | orchestrator | 2025-11-23 01:05:34.629791 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-23 01:05:34.629802 | 
orchestrator | Sunday 23 November 2025 01:01:05 +0000 (0:00:02.335) 0:04:31.914 ******* 2025-11-23 01:05:34.629813 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:05:34.629825 | orchestrator | 2025-11-23 01:05:34.629836 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-11-23 01:05:34.629847 | orchestrator | Sunday 23 November 2025 01:01:06 +0000 (0:00:01.059) 0:04:32.973 ******* 2025-11-23 01:05:34.629866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629925 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.629994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.630005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.630062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.630076 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.630096 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.630120 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.630132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.630144 | orchestrator | 2025-11-23 01:05:34.630155 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-11-23 01:05:34.630166 | orchestrator | Sunday 23 November 2025 01:01:10 +0000 (0:00:03.731) 0:04:36.704 ******* 2025-11-23 01:05:34.630177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.630189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.630200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630223 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:05:34.630241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.630253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.630326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.630339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.630370 | 
orchestrator | skipping: [testbed-node-4] 2025-11-23 01:05:34.630398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630410 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:05:34.630427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-23 01:05:34.630440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630450 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.630462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-23 01:05:34.630473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630484 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.630495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-23 01:05:34.630518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630529 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.630538 | orchestrator | 2025-11-23 01:05:34.630548 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-11-23 01:05:34.630558 | orchestrator | Sunday 23 November 2025 01:01:12 +0000 (0:00:01.533) 0:04:38.237 ******* 2025-11-23 01:05:34.630572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.630583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.630594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630604 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:05:34.630614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.630629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.630650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630661 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:05:34.630671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.630681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.630691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630708 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:05:34.630718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-23 01:05:34.630733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630743 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.630757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-23 01:05:34.630767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630777 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.630787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-23 01:05:34.630797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-23 01:05:34.630814 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.630823 | orchestrator | 2025-11-23 01:05:34.630833 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-23 01:05:34.630843 | orchestrator | Sunday 23 November 2025 01:01:13 +0000 (0:00:01.709) 0:04:39.947 ******* 2025-11-23 01:05:34.630852 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.630862 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.630872 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.630881 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-23 01:05:34.630891 | orchestrator | 2025-11-23 01:05:34.630900 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-11-23 01:05:34.630910 | orchestrator | Sunday 23 November 2025 01:01:14 +0000 (0:00:01.011) 0:04:40.959 ******* 2025-11-23 01:05:34.630919 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-23 01:05:34.630929 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-23 01:05:34.630938 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-23 01:05:34.630947 | orchestrator | 2025-11-23 01:05:34.630957 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-11-23 01:05:34.630966 | orchestrator | Sunday 23 November 2025 01:01:15 +0000 (0:00:01.072) 0:04:42.031 ******* 
2025-11-23 01:05:34.630976 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-23 01:05:34.630985 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-23 01:05:34.630995 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-23 01:05:34.631004 | orchestrator | 2025-11-23 01:05:34.631013 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-11-23 01:05:34.631023 | orchestrator | Sunday 23 November 2025 01:01:16 +0000 (0:00:00.847) 0:04:42.879 ******* 2025-11-23 01:05:34.631032 | orchestrator | ok: [testbed-node-3] 2025-11-23 01:05:34.631042 | orchestrator | ok: [testbed-node-4] 2025-11-23 01:05:34.631051 | orchestrator | ok: [testbed-node-5] 2025-11-23 01:05:34.631061 | orchestrator | 2025-11-23 01:05:34.631070 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-11-23 01:05:34.631080 | orchestrator | Sunday 23 November 2025 01:01:17 +0000 (0:00:00.445) 0:04:43.325 ******* 2025-11-23 01:05:34.631089 | orchestrator | ok: [testbed-node-3] 2025-11-23 01:05:34.631099 | orchestrator | ok: [testbed-node-4] 2025-11-23 01:05:34.631108 | orchestrator | ok: [testbed-node-5] 2025-11-23 01:05:34.631117 | orchestrator | 2025-11-23 01:05:34.631132 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-11-23 01:05:34.631142 | orchestrator | Sunday 23 November 2025 01:01:17 +0000 (0:00:00.628) 0:04:43.954 ******* 2025-11-23 01:05:34.631151 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-11-23 01:05:34.631161 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-11-23 01:05:34.631170 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-11-23 01:05:34.631180 | orchestrator | 2025-11-23 01:05:34.631189 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-11-23 01:05:34.631199 | orchestrator | Sunday 23 November 2025 
01:01:19 +0000 (0:00:01.162) 0:04:45.116 ******* 2025-11-23 01:05:34.631208 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-11-23 01:05:34.631222 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-11-23 01:05:34.631232 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-11-23 01:05:34.631241 | orchestrator | 2025-11-23 01:05:34.631251 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-11-23 01:05:34.631260 | orchestrator | Sunday 23 November 2025 01:01:20 +0000 (0:00:01.187) 0:04:46.304 ******* 2025-11-23 01:05:34.631287 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-11-23 01:05:34.631297 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-11-23 01:05:34.631316 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-11-23 01:05:34.631325 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-11-23 01:05:34.631334 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-11-23 01:05:34.631344 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-11-23 01:05:34.631354 | orchestrator | 2025-11-23 01:05:34.631363 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-11-23 01:05:34.631372 | orchestrator | Sunday 23 November 2025 01:01:24 +0000 (0:00:04.016) 0:04:50.321 ******* 2025-11-23 01:05:34.631382 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:05:34.631391 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:05:34.631401 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:05:34.631410 | orchestrator | 2025-11-23 01:05:34.631419 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-11-23 01:05:34.631429 | orchestrator | Sunday 23 November 2025 01:01:24 +0000 (0:00:00.419) 0:04:50.740 ******* 2025-11-23 01:05:34.631438 | orchestrator 
| skipping: [testbed-node-3] 2025-11-23 01:05:34.631448 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:05:34.631457 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:05:34.631467 | orchestrator | 2025-11-23 01:05:34.631476 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-11-23 01:05:34.631485 | orchestrator | Sunday 23 November 2025 01:01:24 +0000 (0:00:00.299) 0:04:51.040 ******* 2025-11-23 01:05:34.631495 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.631504 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:05:34.631514 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:05:34.631523 | orchestrator | 2025-11-23 01:05:34.631532 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-11-23 01:05:34.631542 | orchestrator | Sunday 23 November 2025 01:01:26 +0000 (0:00:01.220) 0:04:52.261 ******* 2025-11-23 01:05:34.631552 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-11-23 01:05:34.631562 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-11-23 01:05:34.631571 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-11-23 01:05:34.631581 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-11-23 01:05:34.631591 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-11-23 01:05:34.631600 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 
'enabled': 'yes'}) 2025-11-23 01:05:34.631609 | orchestrator | 2025-11-23 01:05:34.631619 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-11-23 01:05:34.631628 | orchestrator | Sunday 23 November 2025 01:01:30 +0000 (0:00:03.816) 0:04:56.078 ******* 2025-11-23 01:05:34.631638 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-23 01:05:34.631647 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-23 01:05:34.631657 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-23 01:05:34.631666 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-23 01:05:34.631676 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.631686 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-23 01:05:34.631695 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:05:34.631705 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-23 01:05:34.631714 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:05:34.631723 | orchestrator | 2025-11-23 01:05:34.631733 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-11-23 01:05:34.631748 | orchestrator | Sunday 23 November 2025 01:01:33 +0000 (0:00:03.392) 0:04:59.470 ******* 2025-11-23 01:05:34.631757 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:05:34.631767 | orchestrator | 2025-11-23 01:05:34.631776 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-11-23 01:05:34.631786 | orchestrator | Sunday 23 November 2025 01:01:33 +0000 (0:00:00.100) 0:04:59.571 ******* 2025-11-23 01:05:34.631796 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:05:34.631805 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:05:34.631815 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:05:34.631830 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.631840 | orchestrator | 
skipping: [testbed-node-1] 2025-11-23 01:05:34.631849 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.631858 | orchestrator | 2025-11-23 01:05:34.631868 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-11-23 01:05:34.631877 | orchestrator | Sunday 23 November 2025 01:01:34 +0000 (0:00:00.524) 0:05:00.096 ******* 2025-11-23 01:05:34.631887 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-23 01:05:34.631896 | orchestrator | 2025-11-23 01:05:34.631906 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-11-23 01:05:34.631915 | orchestrator | Sunday 23 November 2025 01:01:34 +0000 (0:00:00.625) 0:05:00.721 ******* 2025-11-23 01:05:34.631924 | orchestrator | skipping: [testbed-node-3] 2025-11-23 01:05:34.631934 | orchestrator | skipping: [testbed-node-4] 2025-11-23 01:05:34.631948 | orchestrator | skipping: [testbed-node-5] 2025-11-23 01:05:34.631958 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:05:34.631967 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:05:34.631976 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:05:34.631986 | orchestrator | 2025-11-23 01:05:34.631996 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-11-23 01:05:34.632005 | orchestrator | Sunday 23 November 2025 01:01:35 +0000 (0:00:00.669) 0:05:01.390 ******* 2025-11-23 01:05:34.632015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632026 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632132 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632210 | orchestrator | 2025-11-23 01:05:34.632220 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-11-23 01:05:34.632230 | orchestrator | Sunday 23 November 2025 01:01:39 +0000 (0:00:04.268) 0:05:05.658 ******* 2025-11-23 01:05:34.632240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.632255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.632284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.632296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.632306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-23 01:05:34.632321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-23 01:05:34.632337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632348 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-23 01:05:34.632464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.632489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.632505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.632515 | orchestrator |
2025-11-23 01:05:34.632525 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-11-23 01:05:34.632534 | orchestrator | Sunday 23 November 2025 01:01:47 +0000 (0:00:08.200) 0:05:13.859 *******
2025-11-23 01:05:34.632544 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.632553 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.632563 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.632572 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.632581 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.632591 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.632600 | orchestrator |
2025-11-23 01:05:34.632609 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-11-23 01:05:34.632619 | orchestrator | Sunday 23 November 2025 01:01:49 +0000 (0:00:01.843) 0:05:15.703 *******
2025-11-23 01:05:34.632635 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-23 01:05:34.632645 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-23 01:05:34.632654 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-23 01:05:34.632664 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-23 01:05:34.632673 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-23 01:05:34.632683 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.632693 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-23 01:05:34.632702 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.632711 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-23 01:05:34.632721 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-23 01:05:34.632730 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.632739 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-23 01:05:34.632749 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-23 01:05:34.632759 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-23 01:05:34.632768 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-23 01:05:34.632778 | orchestrator |
2025-11-23 01:05:34.632787 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-11-23 01:05:34.632796 | orchestrator | Sunday 23 November 2025 01:01:53 +0000 (0:00:04.112) 0:05:19.815 *******
2025-11-23 01:05:34.632806 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.632815 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.632824 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.632834 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.632843 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.632852 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.632861 | orchestrator |
2025-11-23 01:05:34.632871 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-11-23 01:05:34.632881 | orchestrator | Sunday 23 November 2025 01:01:54 +0000 (0:00:00.533) 0:05:20.348 *******
2025-11-23 01:05:34.632890 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-23 01:05:34.632900 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-23 01:05:34.632909 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-23 01:05:34.632919 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-23 01:05:34.632928 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-23 01:05:34.632943 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-23 01:05:34.632953 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.632962 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.632972 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.632986 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633004 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.633014 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633024 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.633033 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633042 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.633052 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633061 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633071 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633080 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633089 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633099 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-23 01:05:34.633108 | orchestrator |
2025-11-23 01:05:34.633117 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-11-23 01:05:34.633127 | orchestrator | Sunday 23 November 2025 01:01:59 +0000 (0:00:04.811) 0:05:25.160 *******
2025-11-23 01:05:34.633136 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-23 01:05:34.633146 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-23 01:05:34.633155 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-23 01:05:34.633164 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-23 01:05:34.633174 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-23 01:05:34.633183 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-23 01:05:34.633192 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-23 01:05:34.633202 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-23 01:05:34.633211 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-23 01:05:34.633220 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-23 01:05:34.633230 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-23 01:05:34.633239 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-23 01:05:34.633248 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-23 01:05:34.633258 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.633292 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-23 01:05:34.633310 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.633327 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-23 01:05:34.633343 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.633358 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-23 01:05:34.633368 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-23 01:05:34.633378 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-23 01:05:34.633396 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-23 01:05:34.633405 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-23 01:05:34.633415 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-23 01:05:34.633424 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-23 01:05:34.633434 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-23 01:05:34.633449 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-23 01:05:34.633459 | orchestrator |
2025-11-23 01:05:34.633468 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-11-23 01:05:34.633478 | orchestrator | Sunday 23 November 2025 01:02:05 +0000 (0:00:06.911) 0:05:32.072 *******
2025-11-23 01:05:34.633487 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.633496 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.633506 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.633515 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.633525 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.633534 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.633544 | orchestrator |
2025-11-23 01:05:34.633553 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-11-23 01:05:34.633567 | orchestrator | Sunday 23 November 2025 01:02:06 +0000 (0:00:00.637) 0:05:32.709 *******
2025-11-23 01:05:34.633577 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.633587 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.633596 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.633605 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.633615 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.633624 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.633634 | orchestrator |
2025-11-23 01:05:34.633643 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-11-23 01:05:34.633653 | orchestrator | Sunday 23 November 2025 01:02:07 +0000 (0:00:00.535) 0:05:33.244 *******
2025-11-23 01:05:34.633662 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.633671 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.633681 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.633690 | orchestrator | changed: [testbed-node-3]
2025-11-23 01:05:34.633700 | orchestrator | changed: [testbed-node-5]
2025-11-23 01:05:34.633709 | orchestrator | changed: [testbed-node-4]
2025-11-23 01:05:34.633719 | orchestrator |
2025-11-23 01:05:34.633728 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-11-23 01:05:34.633738 | orchestrator | Sunday 23 November 2025 01:02:09 +0000 (0:00:02.643) 0:05:35.888 *******
2025-11-23 01:05:34.633748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-23 01:05:34.633758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-23 01:05:34.633776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.633787 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.633803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-23 01:05:34.633818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-23 01:05:34.633828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.633838 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.633848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-23 01:05:34.633865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.633875 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.633885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-23 01:05:34.633901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-23 01:05:34.633917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.633927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-23 01:05:34.633937 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.633947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.633965 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.633975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-23 01:05:34.633986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.633995 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.634005 | orchestrator |
2025-11-23 01:05:34.634127 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-11-23 01:05:34.634142 | orchestrator | Sunday 23 November 2025 01:02:11 +0000 (0:00:01.986) 0:05:37.874 *******
2025-11-23 01:05:34.634152 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-11-23 01:05:34.634162 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-11-23 01:05:34.634171 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.634181 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-11-23 01:05:34.634197 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-11-23 01:05:34.634207 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-11-23 01:05:34.634217 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-11-23 01:05:34.634226 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.634236 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-11-23 01:05:34.634245 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-11-23 01:05:34.634255 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.634299 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-11-23 01:05:34.634311 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-11-23 01:05:34.634327 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.634337 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.634347 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-11-23 01:05:34.634356 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-11-23 01:05:34.634366 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.634375 | orchestrator |
2025-11-23 01:05:34.634385 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-11-23 01:05:34.634395 | orchestrator | Sunday 23 November 2025 01:02:12 +0000 (0:00:00.832) 0:05:38.707 *******
2025-11-23 01:05:34.634405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-23 01:05:34.634424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-23 01:05:34.634434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-23 01:05:34.634450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-23 01:05:34.634466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-23 01:05:34.634476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-23 01:05:34.634493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-23 01:05:34.634503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-23 01:05:34.634513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-23 01:05:34.634523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.634538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.634553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.634570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.634581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.634591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-23 01:05:34.634601 | orchestrator |
2025-11-23 01:05:34.634611 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-11-23 01:05:34.634621 | orchestrator | Sunday 23 November 2025 01:02:15 +0000 (0:00:03.139) 0:05:41.847 *******
2025-11-23 01:05:34.634631 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.634640 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.634650 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.634660 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.634669 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.634678 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.634688 | orchestrator |
2025-11-23 01:05:34.634697 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-11-23 01:05:34.634707 | orchestrator | Sunday 23 November 2025 01:02:16 +0000 (0:00:00.649) 0:05:42.496 *******
2025-11-23 01:05:34.634716 | orchestrator |
2025-11-23 01:05:34.634726 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-11-23 01:05:34.634736 | orchestrator | Sunday 23 November 2025 01:02:16 +0000 (0:00:00.124) 0:05:42.621 *******
2025-11-23 01:05:34.634745 | orchestrator |
2025-11-23 01:05:34.634755 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-11-23 01:05:34.634764 | orchestrator | Sunday 23 November 2025 01:02:16 +0000 (0:00:00.122) 0:05:42.743 *******
2025-11-23 01:05:34.634774 | orchestrator |
2025-11-23 01:05:34.634783
| orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-23 01:05:34.634793 | orchestrator | Sunday 23 November 2025 01:02:16 +0000 (0:00:00.126) 0:05:42.870 ******* 2025-11-23 01:05:34.634802 | orchestrator | 2025-11-23 01:05:34.634816 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-23 01:05:34.634826 | orchestrator | Sunday 23 November 2025 01:02:16 +0000 (0:00:00.129) 0:05:42.999 ******* 2025-11-23 01:05:34.634842 | orchestrator | 2025-11-23 01:05:34.634851 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-23 01:05:34.634861 | orchestrator | Sunday 23 November 2025 01:02:17 +0000 (0:00:00.126) 0:05:43.125 ******* 2025-11-23 01:05:34.634871 | orchestrator | 2025-11-23 01:05:34.634880 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-11-23 01:05:34.634890 | orchestrator | Sunday 23 November 2025 01:02:17 +0000 (0:00:00.246) 0:05:43.372 ******* 2025-11-23 01:05:34.634899 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.634908 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:05:34.634918 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:05:34.634928 | orchestrator | 2025-11-23 01:05:34.634945 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-11-23 01:05:34.634955 | orchestrator | Sunday 23 November 2025 01:02:29 +0000 (0:00:12.595) 0:05:55.968 ******* 2025-11-23 01:05:34.634965 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:05:34.634974 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:05:34.634984 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:05:34.634993 | orchestrator | 2025-11-23 01:05:34.635003 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-11-23 01:05:34.635012 | orchestrator | Sunday 
23 November 2025 01:02:49 +0000 (0:00:19.319) 0:06:15.287 ******* 2025-11-23 01:05:34.635022 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.635031 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:05:34.635041 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:05:34.635050 | orchestrator | 2025-11-23 01:05:34.635060 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-11-23 01:05:34.635069 | orchestrator | Sunday 23 November 2025 01:03:07 +0000 (0:00:18.018) 0:06:33.305 ******* 2025-11-23 01:05:34.635079 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.635088 | orchestrator | changed: [testbed-node-4] 2025-11-23 01:05:34.635097 | orchestrator | changed: [testbed-node-5] 2025-11-23 01:05:34.635107 | orchestrator | 2025-11-23 01:05:34.635117 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-11-23 01:05:34.635126 | orchestrator | Sunday 23 November 2025 01:03:45 +0000 (0:00:38.158) 0:07:11.464 ******* 2025-11-23 01:05:34.635136 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-11-23 01:05:34.635146 | orchestrator | changed: [testbed-node-3] 2025-11-23 01:05:34.635155 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2025-11-23 01:05:34.635165 | orchestrator | changed: [testbed-node-4]
2025-11-23 01:05:34.635174 | orchestrator | changed: [testbed-node-5]
2025-11-23 01:05:34.635184 | orchestrator |
2025-11-23 01:05:34.635193 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-11-23 01:05:34.635203 | orchestrator | Sunday 23 November 2025 01:03:51 +0000 (0:00:06.246) 0:07:17.710 *******
2025-11-23 01:05:34.635213 | orchestrator | changed: [testbed-node-3]
2025-11-23 01:05:34.635222 | orchestrator | changed: [testbed-node-4]
2025-11-23 01:05:34.635232 | orchestrator | changed: [testbed-node-5]
2025-11-23 01:05:34.635241 | orchestrator |
2025-11-23 01:05:34.635250 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-11-23 01:05:34.635283 | orchestrator | Sunday 23 November 2025 01:03:52 +0000 (0:00:00.804) 0:07:18.515 *******
2025-11-23 01:05:34.635296 | orchestrator | changed: [testbed-node-5]
2025-11-23 01:05:34.635306 | orchestrator | changed: [testbed-node-3]
2025-11-23 01:05:34.635316 | orchestrator | changed: [testbed-node-4]
2025-11-23 01:05:34.635326 | orchestrator |
2025-11-23 01:05:34.635336 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-11-23 01:05:34.635345 | orchestrator | Sunday 23 November 2025 01:04:22 +0000 (0:00:29.727) 0:07:48.242 *******
2025-11-23 01:05:34.635355 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.635365 | orchestrator |
2025-11-23 01:05:34.635374 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-11-23 01:05:34.635390 | orchestrator | Sunday 23 November 2025 01:04:22 +0000 (0:00:00.109) 0:07:48.352 *******
2025-11-23 01:05:34.635400 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.635409 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.635419 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.635429 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.635438 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.635448 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-11-23 01:05:34.635458 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-11-23 01:05:34.635468 | orchestrator |
2025-11-23 01:05:34.635478 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-11-23 01:05:34.635487 | orchestrator | Sunday 23 November 2025 01:04:44 +0000 (0:00:22.233) 0:08:10.585 *******
2025-11-23 01:05:34.635497 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.635507 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.635517 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.635526 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.635536 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.635545 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.635555 | orchestrator |
2025-11-23 01:05:34.635565 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-11-23 01:05:34.635575 | orchestrator | Sunday 23 November 2025 01:04:52 +0000 (0:00:08.149) 0:08:18.734 *******
2025-11-23 01:05:34.635584 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.635594 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.635603 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.635613 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.635622 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.635632 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-11-23 01:05:34.635642 | orchestrator |
2025-11-23 01:05:34.635652 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-11-23 01:05:34.635666 | orchestrator | Sunday 23 November 2025 01:04:57 +0000 (0:00:04.378) 0:08:23.113 *******
2025-11-23 01:05:34.635676 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-11-23 01:05:34.635686 | orchestrator |
2025-11-23 01:05:34.635696 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-11-23 01:05:34.635705 | orchestrator | Sunday 23 November 2025 01:05:10 +0000 (0:00:13.860) 0:08:36.973 *******
2025-11-23 01:05:34.635715 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-11-23 01:05:34.635724 | orchestrator |
2025-11-23 01:05:34.635733 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-11-23 01:05:34.635743 | orchestrator | Sunday 23 November 2025 01:05:12 +0000 (0:00:01.293) 0:08:38.267 *******
2025-11-23 01:05:34.635757 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.635767 | orchestrator |
2025-11-23 01:05:34.635777 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-11-23 01:05:34.635786 | orchestrator | Sunday 23 November 2025 01:05:13 +0000 (0:00:01.193) 0:08:39.460 *******
2025-11-23 01:05:34.635796 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-11-23 01:05:34.635805 | orchestrator |
2025-11-23 01:05:34.635815 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-11-23 01:05:34.635825 | orchestrator | Sunday 23 November 2025 01:05:25 +0000 (0:00:12.182) 0:08:51.643 *******
2025-11-23 01:05:34.635834 | orchestrator | ok: [testbed-node-3]
2025-11-23 01:05:34.635844 | orchestrator | ok: [testbed-node-4]
2025-11-23 01:05:34.635853 | orchestrator | ok: [testbed-node-5]
2025-11-23 01:05:34.635863 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:05:34.635872 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:05:34.635882 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:05:34.635891 | orchestrator |
2025-11-23 01:05:34.635906 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-11-23 01:05:34.635916 | orchestrator |
2025-11-23 01:05:34.635925 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-11-23 01:05:34.635935 | orchestrator | Sunday 23 November 2025 01:05:27 +0000 (0:00:01.569) 0:08:53.213 *******
2025-11-23 01:05:34.635944 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:05:34.635954 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:05:34.635963 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:05:34.635973 | orchestrator |
2025-11-23 01:05:34.635983 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-11-23 01:05:34.635992 | orchestrator |
2025-11-23 01:05:34.636002 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-11-23 01:05:34.636011 | orchestrator | Sunday 23 November 2025 01:05:28 +0000 (0:00:01.013) 0:08:54.227 *******
2025-11-23 01:05:34.636021 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.636030 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.636040 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.636049 | orchestrator |
2025-11-23 01:05:34.636059 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-11-23 01:05:34.636068 | orchestrator |
2025-11-23 01:05:34.636078 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-11-23 01:05:34.636087 | orchestrator | Sunday 23 November 2025 01:05:28 +0000 (0:00:00.444) 0:08:54.672 *******
2025-11-23 01:05:34.636096 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-11-23 01:05:34.636106 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-11-23 01:05:34.636116 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-11-23 01:05:34.636126 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-11-23 01:05:34.636135 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-11-23 01:05:34.636145 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-11-23 01:05:34.636154 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-11-23 01:05:34.636164 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-11-23 01:05:34.636174 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-11-23 01:05:34.636183 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-11-23 01:05:34.636193 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-11-23 01:05:34.636202 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-11-23 01:05:34.636212 | orchestrator | skipping: [testbed-node-3]
2025-11-23 01:05:34.636221 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-11-23 01:05:34.636231 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-11-23 01:05:34.636240 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-11-23 01:05:34.636249 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-11-23 01:05:34.636259 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-11-23 01:05:34.636293 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-11-23 01:05:34.636304 | orchestrator | skipping: [testbed-node-4]
2025-11-23 01:05:34.636313 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-11-23 01:05:34.636323 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-11-23 01:05:34.636332 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-11-23 01:05:34.636342 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-11-23 01:05:34.636351 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-11-23 01:05:34.636361 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-11-23 01:05:34.636370 | orchestrator | skipping: [testbed-node-5]
2025-11-23 01:05:34.636380 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-11-23 01:05:34.636396 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-11-23 01:05:34.636405 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-11-23 01:05:34.636415 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-11-23 01:05:34.636429 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-11-23 01:05:34.636439 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-11-23 01:05:34.636449 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.636459 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.636468 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-11-23 01:05:34.636477 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-11-23 01:05:34.636487 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-11-23 01:05:34.636496 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-11-23 01:05:34.636506 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-11-23 01:05:34.636520 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-11-23 01:05:34.636530 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.636540 | orchestrator |
2025-11-23 01:05:34.636549 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-11-23 01:05:34.636559 | orchestrator |
2025-11-23 01:05:34.636569 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-11-23 01:05:34.636578 | orchestrator | Sunday 23 November 2025 01:05:29 +0000 (0:00:01.156) 0:08:55.828 *******
2025-11-23 01:05:34.636588 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-11-23 01:05:34.636598 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-11-23 01:05:34.636608 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.636617 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-11-23 01:05:34.636627 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-11-23 01:05:34.636636 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.636645 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-11-23 01:05:34.636655 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-11-23 01:05:34.636664 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.636674 | orchestrator |
2025-11-23 01:05:34.636684 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-11-23 01:05:34.636693 | orchestrator |
2025-11-23 01:05:34.636703 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-11-23 01:05:34.636712 | orchestrator | Sunday 23 November 2025 01:05:30 +0000 (0:00:00.575) 0:08:56.404 *******
2025-11-23 01:05:34.636722 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.636731 | orchestrator |
2025-11-23 01:05:34.636741 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-11-23 01:05:34.636750 | orchestrator |
2025-11-23 01:05:34.636760 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-11-23 01:05:34.636769 | orchestrator | Sunday 23 November 2025 01:05:30 +0000 (0:00:00.581) 0:08:56.986 *******
2025-11-23 01:05:34.636779 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:05:34.636788 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:05:34.636798 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:05:34.636808 | orchestrator |
2025-11-23 01:05:34.636817 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 01:05:34.636827 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 01:05:34.636837 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-11-23 01:05:34.636847 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-11-23 01:05:34.636863 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-11-23 01:05:34.636873 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-11-23 01:05:34.636882 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-11-23 01:05:34.636892 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-11-23 01:05:34.636902 | orchestrator |
2025-11-23 01:05:34.636911 | orchestrator |
2025-11-23 01:05:34.636921 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 01:05:34.636931 | orchestrator | Sunday 23 November 2025 01:05:31 +0000 (0:00:00.425) 0:08:57.412 *******
2025-11-23 01:05:34.636940 | orchestrator | ===============================================================================
2025-11-23 01:05:34.636950 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.16s
2025-11-23 01:05:34.636959 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.76s
2025-11-23 01:05:34.636969 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.73s
2025-11-23 01:05:34.636979 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.23s
2025-11-23 01:05:34.636988 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.87s
2025-11-23 01:05:34.636998 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.44s
2025-11-23 01:05:34.637008 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.32s
2025-11-23 01:05:34.637017 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.49s
2025-11-23 01:05:34.637031 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.02s
2025-11-23 01:05:34.637041 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.73s
2025-11-23 01:05:34.637051 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.73s
2025-11-23 01:05:34.637060 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.08s
2025-11-23 01:05:34.637070 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.86s
2025-11-23 01:05:34.637079 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.22s
2025-11-23 01:05:34.637089 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.74s
2025-11-23 01:05:34.637103 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.60s
2025-11-23 01:05:34.637112 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.18s
2025-11-23 01:05:34.637122 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.26s
2025-11-23 01:05:34.637131 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.20s
2025-11-23 01:05:34.637141 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.15s
2025-11-23 01:05:34.637150 | orchestrator | 2025-11-23 01:05:34 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:34.637160 | orchestrator | 2025-11-23 01:05:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:34.637170 | orchestrator | 2025-11-23 01:05:34 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:34.637179 | orchestrator | 2025-11-23 01:05:34 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:37.669031 | orchestrator | 2025-11-23 01:05:37 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:37.670777 | orchestrator | 2025-11-23 01:05:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:37.673333 | orchestrator | 2025-11-23 01:05:37 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:37.673378 | orchestrator | 2025-11-23 01:05:37 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:40.716158 | orchestrator | 2025-11-23 01:05:40 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:40.717189 | orchestrator | 2025-11-23 01:05:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:40.718751 | orchestrator | 2025-11-23 01:05:40 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:40.719044 | orchestrator | 2025-11-23 01:05:40 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:43.766616 | orchestrator | 2025-11-23 01:05:43 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:43.767740 | orchestrator | 2025-11-23 01:05:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:43.769061 | orchestrator | 2025-11-23 01:05:43 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:43.769102 | orchestrator | 2025-11-23 01:05:43 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:46.808407 | orchestrator | 2025-11-23 01:05:46 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:46.810206 | orchestrator | 2025-11-23 01:05:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:46.812330 | orchestrator | 2025-11-23 01:05:46 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:46.812648 | orchestrator | 2025-11-23 01:05:46 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:49.853998 | orchestrator | 2025-11-23 01:05:49 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:49.855630 | orchestrator | 2025-11-23 01:05:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:49.857476 | orchestrator | 2025-11-23 01:05:49 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:49.857515 | orchestrator | 2025-11-23 01:05:49 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:52.895144 | orchestrator | 2025-11-23 01:05:52 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:52.897085 | orchestrator | 2025-11-23 01:05:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:52.899058 | orchestrator | 2025-11-23 01:05:52 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:52.899096 | orchestrator | 2025-11-23 01:05:52 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:55.941038 | orchestrator | 2025-11-23 01:05:55 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:55.942543 | orchestrator | 2025-11-23 01:05:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:55.943765 | orchestrator | 2025-11-23 01:05:55 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:55.943793 | orchestrator | 2025-11-23 01:05:55 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:05:58.987451 | orchestrator | 2025-11-23 01:05:58 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:05:58.989173 | orchestrator | 2025-11-23 01:05:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:05:58.991643 | orchestrator | 2025-11-23 01:05:58 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:05:58.991680 | orchestrator | 2025-11-23 01:05:58 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:06:02.040014 | orchestrator | 2025-11-23 01:06:02 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:06:02.041984 | orchestrator | 2025-11-23 01:06:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:06:02.044729 | orchestrator | 2025-11-23 01:06:02 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:06:02.044814 | orchestrator | 2025-11-23 01:06:02 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:06:05.083435 | orchestrator | 2025-11-23 01:06:05 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:06:05.084467 | orchestrator | 2025-11-23 01:06:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:06:05.086321 | orchestrator | 2025-11-23 01:06:05 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:06:05.086452 | orchestrator | 2025-11-23 01:06:05 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:06:08.122625 | orchestrator | 2025-11-23 01:06:08 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:06:08.123695 | orchestrator | 2025-11-23 01:06:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:06:08.125120 | orchestrator | 2025-11-23 01:06:08 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:06:08.125167 | orchestrator | 2025-11-23 01:06:08 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:06:11.165868 | orchestrator | 2025-11-23 01:06:11 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:06:11.167444 | orchestrator | 2025-11-23 01:06:11 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:06:11.170217 | orchestrator | 2025-11-23 01:06:11 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:06:11.170245 | orchestrator | 2025-11-23 01:06:11 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:06:14.211895 | orchestrator | 2025-11-23 01:06:14 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:06:14.213395 | orchestrator | 2025-11-23 01:06:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:06:14.215370 | orchestrator | 2025-11-23 01:06:14 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:06:14.215404 | orchestrator | 2025-11-23 01:06:14 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:06:17.259310 | orchestrator | 2025-11-23 01:06:17 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state STARTED
2025-11-23 01:06:17.262848 | orchestrator | 2025-11-23 01:06:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:06:17.264111 | orchestrator | 2025-11-23 01:06:17 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED
2025-11-23 01:06:17.264586 | orchestrator | 2025-11-23 01:06:17 | INFO  | Wait 1 second(s) until the next check
2025-11-23 01:06:20.303152 | orchestrator |
2025-11-23 01:06:20.303239 | orchestrator | 2025-11-23 01:06:20 | INFO  | Task cb8e366d-bb61-429e-9be5-dede69666a98 is in state SUCCESS
2025-11-23 01:06:20.304413 | orchestrator |
2025-11-23 01:06:20.304454 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 01:06:20.304483 | orchestrator |
2025-11-23 01:06:20.304489 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 01:06:20.304497 | orchestrator | Sunday 23 November 2025 01:03:50 +0000 (0:00:00.243) 0:00:00.243 *******
2025-11-23 01:06:20.304503 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:06:20.304510 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:06:20.304516 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:06:20.304521 | orchestrator |
2025-11-23 01:06:20.304527 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 01:06:20.304533 | orchestrator | Sunday 23 November 2025 01:03:50 +0000 (0:00:00.274) 0:00:00.518 *******
2025-11-23 01:06:20.304539 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-11-23 01:06:20.304546 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-11-23 01:06:20.304552 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-11-23 01:06:20.304557 | orchestrator |
2025-11-23 01:06:20.304563 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-11-23 01:06:20.304569 | orchestrator |
2025-11-23 01:06:20.304587 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-11-23 01:06:20.304593 | orchestrator | Sunday 23 November 2025 01:03:51 +0000 (0:00:00.387) 0:00:00.905 ******* 2025-11-23 01:06:20.304599 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:06:20.304605 | orchestrator | 2025-11-23 01:06:20.304611 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-11-23 01:06:20.304616 | orchestrator | Sunday 23 November 2025 01:03:51 +0000 (0:00:00.471) 0:00:01.377 ******* 2025-11-23 01:06:20.304626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.304634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.304641 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.304647 | orchestrator | 2025-11-23 01:06:20.304653 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-11-23 01:06:20.304659 | orchestrator | Sunday 23 November 2025 01:03:52 +0000 (0:00:00.719) 0:00:02.096 ******* 2025-11-23 01:06:20.304665 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-11-23 01:06:20.304768 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-11-23 01:06:20.304780 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 01:06:20.304786 | orchestrator | 2025-11-23 01:06:20.304792 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-11-23 01:06:20.304798 | orchestrator | Sunday 23 November 2025 01:03:53 +0000 (0:00:00.732) 0:00:02.829 ******* 2025-11-23 01:06:20.304804 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:06:20.304810 | orchestrator | 2025-11-23 01:06:20.304815 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-11-23 01:06:20.304999 | orchestrator | Sunday 23 November 2025 01:03:53 +0000 (0:00:00.642) 0:00:03.472 ******* 2025-11-23 01:06:20.305021 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305049 | orchestrator | 2025-11-23 01:06:20.305055 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-11-23 01:06:20.305061 | orchestrator | Sunday 23 November 2025 01:03:55 +0000 (0:00:01.444) 0:00:04.916 ******* 2025-11-23 01:06:20.305066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-23 01:06:20.305072 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:06:20.305079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-23 01:06:20.305093 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:06:20.305108 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-23 01:06:20.305115 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:06:20.305120 | orchestrator | 2025-11-23 01:06:20.305126 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-11-23 01:06:20.305132 | orchestrator | Sunday 23 November 2025 01:03:55 +0000 (0:00:00.390) 0:00:05.307 ******* 2025-11-23 01:06:20.305142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-23 01:06:20.305148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-23 01:06:20.305155 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:06:20.305161 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:06:20.305222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-23 01:06:20.305230 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:06:20.305235 | orchestrator | 2025-11-23 01:06:20.305241 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-11-23 01:06:20.305469 | orchestrator | Sunday 23 November 2025 01:03:56 +0000 (0:00:01.049) 0:00:06.356 ******* 2025-11-23 01:06:20.305485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305524 | orchestrator | 2025-11-23 01:06:20.305529 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-11-23 01:06:20.305535 | orchestrator | Sunday 23 November 2025 01:03:57 +0000 
(0:00:01.217) 0:00:07.574 ******* 2025-11-23 01:06:20.305547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.305571 | orchestrator | 2025-11-23 01:06:20.305576 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-11-23 01:06:20.305582 | orchestrator | Sunday 23 November 2025 01:03:59 +0000 (0:00:01.340) 0:00:08.915 ******* 2025-11-23 01:06:20.305587 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:06:20.305593 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:06:20.305599 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:06:20.305604 | orchestrator | 2025-11-23 01:06:20.305610 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-11-23 01:06:20.305616 | orchestrator | Sunday 23 November 2025 01:03:59 +0000 (0:00:00.498) 0:00:09.413 ******* 2025-11-23 01:06:20.305622 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-23 01:06:20.305628 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-23 01:06:20.305634 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-23 01:06:20.305639 | orchestrator | 2025-11-23 01:06:20.305645 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-11-23 01:06:20.305650 | orchestrator | Sunday 23 November 2025 01:04:01 +0000 (0:00:01.292) 0:00:10.706 ******* 2025-11-23 01:06:20.305656 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-23 01:06:20.305663 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-23 01:06:20.305669 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-23 01:06:20.305674 | orchestrator | 2025-11-23 01:06:20.305680 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-11-23 01:06:20.305686 | orchestrator | Sunday 23 November 2025 01:04:02 +0000 (0:00:01.330) 0:00:12.036 ******* 2025-11-23 01:06:20.305708 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-23 01:06:20.305715 | orchestrator | 2025-11-23 01:06:20.305721 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-11-23 01:06:20.305726 | orchestrator | Sunday 23 November 2025 01:04:03 +0000 (0:00:00.647) 0:00:12.684 ******* 2025-11-23 01:06:20.305732 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-11-23 01:06:20.305737 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-11-23 01:06:20.305743 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:06:20.305749 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:06:20.305755 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:06:20.305761 | orchestrator | 2025-11-23 01:06:20.305767 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-11-23 01:06:20.305772 | orchestrator | Sunday 23 November 2025 01:04:03 +0000 (0:00:00.709) 0:00:13.394 ******* 2025-11-23 01:06:20.305778 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:06:20.305784 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:06:20.305790 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:06:20.305796 | orchestrator | 2025-11-23 01:06:20.305801 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-11-23 01:06:20.305811 | orchestrator | Sunday 23 November 2025 01:04:04 +0000 (0:00:00.413) 0:00:13.808 ******* 2025-11-23 01:06:20.305818 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1103028, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.643284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1103028, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.643284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1103028, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.643284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-11-23 01:06:20.305845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103095, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6549375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103095, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6549375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103095, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6549375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103046, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.645819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103046, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.645819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103046, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.645819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103096, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6559374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103096, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6559374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103096, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6559374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1103068, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6499374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1103068, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6499374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1103068, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6499374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103090, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6539376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.305981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103090, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6539376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103090, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6539376, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1103025, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6423829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1103025, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6423829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1103025, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6423829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1103035, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6439373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1103035, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6439373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1103035, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6439373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103050, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6459374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103050, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6459374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103050, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6459374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103078, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6520307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103078, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6520307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103078, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6520307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103094, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.65492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103094, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.65492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103094, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.65492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103037, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6452277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103037, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6452277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103037, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6452277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103088, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.653789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103088, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.653789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103088, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.653789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103071, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6513584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103071, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6513584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103071, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6513584, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103063, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6489375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103063, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6489375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103063, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6489375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103061, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6479375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103061, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6479375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103061, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6479375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103080, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6533716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103080, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6533716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103080, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6533716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103057, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6476834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103057, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6476834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103057, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6476834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103093, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6539376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103093, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6539376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103093, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6539376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1310744, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7669387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1310744, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7669387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1310744, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7669387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103122, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6789377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103122, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6789377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103122, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6789377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103106, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6599376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103106, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6599376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103106, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6599376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103174, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6819377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103174, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6819377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103174, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6819377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103102, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6573205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103102, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6573205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103102, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6573205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103204, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.700938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103204, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.700938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103204, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.700938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103175, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6969378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-23 01:06:20.306663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False,
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103175, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6969378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103175, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6969378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103210, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7019765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103210, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7019765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103210, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7019765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103307, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.765443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306695 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103307, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.765443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103307, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.765443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103200, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.698491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-11-23 01:06:20.306711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103200, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.698491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103200, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.698491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103169, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6803155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103169, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6803155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103169, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6803155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103119, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6628096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103119, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6628096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103119, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6628096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103167, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6803155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103167, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6803155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103167, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6803155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103111, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 
1763856901.6619375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103111, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6619375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103111, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6619375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
16098, 'inode': 1103170, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6809378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1103170, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6809378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1103170, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6809378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103215, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7639387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103215, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7639387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103215, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7639387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103213, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.703938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103213, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.703938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103213, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.703938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-11-23 01:06:20.306842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103103, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6576836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103103, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6576836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103103, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6576836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103104, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6579375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103104, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6579375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103104, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.6579375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103198, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.697938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103198, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.697938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103198, 'dev': 159, 'nlink': 1, 'atime': 
1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.697938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1103212, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7019765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1103212, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7019765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 21898, 'inode': 1103212, 'dev': 159, 'nlink': 1, 'atime': 1763856138.0, 'mtime': 1763856138.0, 'ctime': 1763856901.7019765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-23 01:06:20.306907 | orchestrator | 2025-11-23 01:06:20.306911 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-11-23 01:06:20.306915 | orchestrator | Sunday 23 November 2025 01:04:40 +0000 (0:00:36.434) 0:00:50.242 ******* 2025-11-23 01:06:20.306919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.306926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2025-11-23 01:06:20.306930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-23 01:06:20.306934 | orchestrator | 2025-11-23 01:06:20.306938 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-11-23 01:06:20.306942 | orchestrator | Sunday 23 November 2025 01:04:41 +0000 (0:00:01.051) 0:00:51.294 ******* 2025-11-23 01:06:20.306946 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:06:20.306950 | orchestrator | 2025-11-23 01:06:20.306953 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-11-23 01:06:20.306957 | orchestrator | Sunday 23 November 2025 01:04:44 +0000 (0:00:02.379) 0:00:53.673 ******* 2025-11-23 01:06:20.306961 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:06:20.306964 | orchestrator | 2025-11-23 01:06:20.306968 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-23 01:06:20.306972 | orchestrator | Sunday 23 November 2025 01:04:46 +0000 (0:00:02.550) 0:00:56.224 ******* 2025-11-23 01:06:20.306976 | orchestrator | 2025-11-23 01:06:20.306979 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-23 01:06:20.306986 | orchestrator | Sunday 23 November 2025 01:04:46 +0000 (0:00:00.129) 
0:00:56.353 ******* 2025-11-23 01:06:20.306990 | orchestrator | 2025-11-23 01:06:20.306994 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-23 01:06:20.306997 | orchestrator | Sunday 23 November 2025 01:04:46 +0000 (0:00:00.136) 0:00:56.490 ******* 2025-11-23 01:06:20.307001 | orchestrator | 2025-11-23 01:06:20.307005 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-11-23 01:06:20.307009 | orchestrator | Sunday 23 November 2025 01:04:47 +0000 (0:00:00.376) 0:00:56.866 ******* 2025-11-23 01:06:20.307012 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:06:20.307016 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:06:20.307020 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:06:20.307024 | orchestrator | 2025-11-23 01:06:20.307027 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-11-23 01:06:20.307031 | orchestrator | Sunday 23 November 2025 01:04:49 +0000 (0:00:02.212) 0:00:59.079 ******* 2025-11-23 01:06:20.307035 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:06:20.307041 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:06:20.307045 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-11-23 01:06:20.307052 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-11-23 01:06:20.307056 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-11-23 01:06:20.307062 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2025-11-23 01:06:20.307069 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:06:20.307075 | orchestrator | 2025-11-23 01:06:20.307080 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-11-23 01:06:20.307087 | orchestrator | Sunday 23 November 2025 01:05:40 +0000 (0:00:51.073) 0:01:50.152 ******* 2025-11-23 01:06:20.307093 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:06:20.307099 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:06:20.307105 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:06:20.307110 | orchestrator | 2025-11-23 01:06:20.307116 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-11-23 01:06:20.307122 | orchestrator | Sunday 23 November 2025 01:06:12 +0000 (0:00:31.604) 0:02:21.756 ******* 2025-11-23 01:06:20.307129 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:06:20.307135 | orchestrator | 2025-11-23 01:06:20.307141 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-11-23 01:06:20.307147 | orchestrator | Sunday 23 November 2025 01:06:14 +0000 (0:00:02.212) 0:02:23.969 ******* 2025-11-23 01:06:20.307153 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:06:20.307159 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:06:20.307165 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:06:20.307171 | orchestrator | 2025-11-23 01:06:20.307178 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-11-23 01:06:20.307184 | orchestrator | Sunday 23 November 2025 01:06:14 +0000 (0:00:00.386) 0:02:24.355 ******* 2025-11-23 01:06:20.307191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-11-23 01:06:20.307200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-11-23 01:06:20.307207 | orchestrator | 2025-11-23 01:06:20.307213 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-11-23 01:06:20.307219 | orchestrator | Sunday 23 November 2025 01:06:17 +0000 (0:00:02.475) 0:02:26.831 ******* 2025-11-23 01:06:20.307225 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:06:20.307232 | orchestrator | 2025-11-23 01:06:20.307238 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-23 01:06:20.307260 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-23 01:06:20.307269 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-23 01:06:20.307276 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-23 01:06:20.307283 | orchestrator | 2025-11-23 01:06:20.307289 | orchestrator | 2025-11-23 01:06:20.307293 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 01:06:20.307296 | orchestrator | Sunday 23 November 2025 01:06:17 +0000 (0:00:00.241) 0:02:27.073 ******* 2025-11-23 01:06:20.307305 | orchestrator | =============================================================================== 2025-11-23 01:06:20.307309 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.07s 2025-11-23 01:06:20.307313 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 36.43s 2025-11-23 01:06:20.307316 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.60s 2025-11-23 01:06:20.307320 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.55s 2025-11-23 01:06:20.307328 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.48s 2025-11-23 01:06:20.307332 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.38s 2025-11-23 01:06:20.307335 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.21s 2025-11-23 01:06:20.307341 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.21s 2025-11-23 01:06:20.307347 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.44s 2025-11-23 01:06:20.307353 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.34s 2025-11-23 01:06:20.307359 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.33s 2025-11-23 01:06:20.307366 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.29s 2025-11-23 01:06:20.307372 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.22s 2025-11-23 01:06:20.307379 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.05s 2025-11-23 01:06:20.307388 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.05s 2025-11-23 01:06:20.307395 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.73s 2025-11-23 01:06:20.307403 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.72s 2025-11-23 01:06:20.307409 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.71s 2025-11-23 01:06:20.307413 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.65s 2025-11-23 01:06:20.307416 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.64s 2025-11-23 01:06:20.307420 | orchestrator | 2025-11-23 01:06:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:06:20.307794 | orchestrator | 2025-11-23 01:06:20 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:06:20.307808 | orchestrator | 2025-11-23 01:06:20 | INFO  | Wait 1 second(s) until the next check
| Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:25.852995 | orchestrator | 2025-11-23 01:09:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:28.892982 | orchestrator | 2025-11-23 01:09:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:28.894432 | orchestrator | 2025-11-23 01:09:28 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:28.894494 | orchestrator | 2025-11-23 01:09:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:31.930921 | orchestrator | 2025-11-23 01:09:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:31.933002 | orchestrator | 2025-11-23 01:09:31 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:31.933129 | orchestrator | 2025-11-23 01:09:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:34.966640 | orchestrator | 2025-11-23 01:09:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:34.968124 | orchestrator | 2025-11-23 01:09:34 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:34.968197 | orchestrator | 2025-11-23 01:09:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:38.007300 | orchestrator | 2025-11-23 01:09:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:38.008211 | orchestrator | 2025-11-23 01:09:38 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:38.008244 | orchestrator | 2025-11-23 01:09:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:41.044753 | orchestrator | 2025-11-23 01:09:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:41.044877 | orchestrator | 2025-11-23 01:09:41 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 
01:09:41.044900 | orchestrator | 2025-11-23 01:09:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:44.089453 | orchestrator | 2025-11-23 01:09:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:44.091122 | orchestrator | 2025-11-23 01:09:44 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:44.091238 | orchestrator | 2025-11-23 01:09:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:47.135257 | orchestrator | 2025-11-23 01:09:47 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:47.136719 | orchestrator | 2025-11-23 01:09:47 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:47.136979 | orchestrator | 2025-11-23 01:09:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:50.175320 | orchestrator | 2025-11-23 01:09:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:50.177447 | orchestrator | 2025-11-23 01:09:50 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:50.177586 | orchestrator | 2025-11-23 01:09:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:53.219504 | orchestrator | 2025-11-23 01:09:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:53.219993 | orchestrator | 2025-11-23 01:09:53 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:53.220025 | orchestrator | 2025-11-23 01:09:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:09:56.265082 | orchestrator | 2025-11-23 01:09:56 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:56.267124 | orchestrator | 2025-11-23 01:09:56 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state STARTED 2025-11-23 01:09:56.268219 | orchestrator | 2025-11-23 01:09:56 | INFO  | Wait 1 second(s) 
until the next check 2025-11-23 01:09:59.306002 | orchestrator | 2025-11-23 01:09:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:09:59.306850 | orchestrator | 2025-11-23 01:09:59 | INFO  | Task 9d0e56d0-6ebb-46eb-8446-d40e3633773b is in state SUCCESS 2025-11-23 01:09:59.309080 | orchestrator | 2025-11-23 01:09:59.309181 | orchestrator | 2025-11-23 01:09:59.309319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-23 01:09:59.309344 | orchestrator | 2025-11-23 01:09:59.309364 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-23 01:09:59.309384 | orchestrator | Sunday 23 November 2025 01:05:12 +0000 (0:00:00.230) 0:00:00.230 ******* 2025-11-23 01:09:59.309405 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:09:59.309427 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:09:59.309478 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:09:59.309498 | orchestrator | 2025-11-23 01:09:59.309516 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-23 01:09:59.309533 | orchestrator | Sunday 23 November 2025 01:05:12 +0000 (0:00:00.266) 0:00:00.496 ******* 2025-11-23 01:09:59.309551 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-11-23 01:09:59.309570 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-11-23 01:09:59.309589 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-11-23 01:09:59.309606 | orchestrator | 2025-11-23 01:09:59.309623 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-11-23 01:09:59.309641 | orchestrator | 2025-11-23 01:09:59.309659 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-23 01:09:59.309677 | orchestrator | Sunday 23 November 2025 01:05:12 +0000 (0:00:00.353) 
0:00:00.850 ******* 2025-11-23 01:09:59.309695 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:09:59.309714 | orchestrator | 2025-11-23 01:09:59.309733 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-11-23 01:09:59.309751 | orchestrator | Sunday 23 November 2025 01:05:13 +0000 (0:00:00.488) 0:00:01.338 ******* 2025-11-23 01:09:59.309768 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-11-23 01:09:59.309786 | orchestrator | 2025-11-23 01:09:59.309805 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-11-23 01:09:59.309935 | orchestrator | Sunday 23 November 2025 01:05:16 +0000 (0:00:03.620) 0:00:04.959 ******* 2025-11-23 01:09:59.309954 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-11-23 01:09:59.309966 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-11-23 01:09:59.309977 | orchestrator | 2025-11-23 01:09:59.309988 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-11-23 01:09:59.310000 | orchestrator | Sunday 23 November 2025 01:05:23 +0000 (0:00:06.836) 0:00:11.796 ******* 2025-11-23 01:09:59.310011 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-23 01:09:59.310079 | orchestrator | 2025-11-23 01:09:59.310091 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-11-23 01:09:59.310102 | orchestrator | Sunday 23 November 2025 01:05:27 +0000 (0:00:03.424) 0:00:15.220 ******* 2025-11-23 01:09:59.310113 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-23 01:09:59.310124 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-11-23 
01:09:59.310190 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-11-23 01:09:59.310217 | orchestrator | 2025-11-23 01:09:59.310234 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-11-23 01:09:59.310252 | orchestrator | Sunday 23 November 2025 01:05:35 +0000 (0:00:08.244) 0:00:23.465 ******* 2025-11-23 01:09:59.310268 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-23 01:09:59.310285 | orchestrator | 2025-11-23 01:09:59.310340 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-11-23 01:09:59.310359 | orchestrator | Sunday 23 November 2025 01:05:39 +0000 (0:00:03.599) 0:00:27.065 ******* 2025-11-23 01:09:59.310377 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-11-23 01:09:59.310395 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-11-23 01:09:59.310414 | orchestrator | 2025-11-23 01:09:59.310433 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-11-23 01:09:59.310452 | orchestrator | Sunday 23 November 2025 01:05:46 +0000 (0:00:07.671) 0:00:34.736 ******* 2025-11-23 01:09:59.310470 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-11-23 01:09:59.310489 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-11-23 01:09:59.310507 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-11-23 01:09:59.310525 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-11-23 01:09:59.310543 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-11-23 01:09:59.310561 | orchestrator | 2025-11-23 01:09:59.310583 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-23 01:09:59.310601 | orchestrator | Sunday 23 November 2025 
01:06:02 +0000 (0:00:16.111) 0:00:50.848 ******* 2025-11-23 01:09:59.310620 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:09:59.310640 | orchestrator | 2025-11-23 01:09:59.310658 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-11-23 01:09:59.310676 | orchestrator | Sunday 23 November 2025 01:06:03 +0000 (0:00:00.496) 0:00:51.344 ******* 2025-11-23 01:09:59.310695 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.310714 | orchestrator | 2025-11-23 01:09:59.310750 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-11-23 01:09:59.310769 | orchestrator | Sunday 23 November 2025 01:06:07 +0000 (0:00:04.402) 0:00:55.746 ******* 2025-11-23 01:09:59.310788 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.310800 | orchestrator | 2025-11-23 01:09:59.310811 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-11-23 01:09:59.310933 | orchestrator | Sunday 23 November 2025 01:06:12 +0000 (0:00:04.793) 0:01:00.539 ******* 2025-11-23 01:09:59.310948 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:09:59.310959 | orchestrator | 2025-11-23 01:09:59.310970 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-11-23 01:09:59.310981 | orchestrator | Sunday 23 November 2025 01:06:15 +0000 (0:00:03.343) 0:01:03.883 ******* 2025-11-23 01:09:59.310991 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-11-23 01:09:59.311003 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-11-23 01:09:59.311053 | orchestrator | 2025-11-23 01:09:59.311065 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-11-23 01:09:59.311075 | orchestrator | Sunday 23 November 2025 01:06:27 +0000 
(0:00:11.255) 0:01:15.139 ******* 2025-11-23 01:09:59.311086 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-11-23 01:09:59.311124 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-11-23 01:09:59.311159 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-11-23 01:09:59.311171 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-11-23 01:09:59.311182 | orchestrator | 2025-11-23 01:09:59.311193 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-11-23 01:09:59.311204 | orchestrator | Sunday 23 November 2025 01:06:43 +0000 (0:00:16.193) 0:01:31.332 ******* 2025-11-23 01:09:59.311231 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.311245 | orchestrator | 2025-11-23 01:09:59.311351 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-11-23 01:09:59.311366 | orchestrator | Sunday 23 November 2025 01:06:48 +0000 (0:00:04.986) 0:01:36.318 ******* 2025-11-23 01:09:59.311382 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.311398 | orchestrator | 2025-11-23 01:09:59.311414 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-11-23 01:09:59.311432 | orchestrator | Sunday 23 November 2025 01:06:54 +0000 (0:00:06.097) 0:01:42.416 ******* 2025-11-23 01:09:59.311449 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:09:59.311466 | orchestrator | 2025-11-23 01:09:59.311477 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-11-23 
01:09:59.311488 | orchestrator | Sunday 23 November 2025 01:06:54 +0000 (0:00:00.197) 0:01:42.613 ******* 2025-11-23 01:09:59.311500 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:09:59.311511 | orchestrator | 2025-11-23 01:09:59.311522 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-23 01:09:59.311534 | orchestrator | Sunday 23 November 2025 01:06:58 +0000 (0:00:03.936) 0:01:46.549 ******* 2025-11-23 01:09:59.311545 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:09:59.311556 | orchestrator | 2025-11-23 01:09:59.311567 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-11-23 01:09:59.311578 | orchestrator | Sunday 23 November 2025 01:06:59 +0000 (0:00:00.856) 0:01:47.406 ******* 2025-11-23 01:09:59.311628 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.311639 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.311681 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.311693 | orchestrator | 2025-11-23 01:09:59.311703 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-11-23 01:09:59.311713 | orchestrator | Sunday 23 November 2025 01:07:04 +0000 (0:00:05.253) 0:01:52.660 ******* 2025-11-23 01:09:59.311722 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.311732 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.311742 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.311751 | orchestrator | 2025-11-23 01:09:59.311761 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-11-23 01:09:59.311771 | orchestrator | Sunday 23 November 2025 01:07:09 +0000 (0:00:04.949) 0:01:57.609 ******* 2025-11-23 01:09:59.311780 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.311790 | orchestrator | 
changed: [testbed-node-1] 2025-11-23 01:09:59.311799 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.311809 | orchestrator | 2025-11-23 01:09:59.311819 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-11-23 01:09:59.311828 | orchestrator | Sunday 23 November 2025 01:07:10 +0000 (0:00:00.784) 0:01:58.394 ******* 2025-11-23 01:09:59.311838 | orchestrator | ok: [testbed-node-1] 2025-11-23 01:09:59.311847 | orchestrator | ok: [testbed-node-2] 2025-11-23 01:09:59.311857 | orchestrator | ok: [testbed-node-0] 2025-11-23 01:09:59.311867 | orchestrator | 2025-11-23 01:09:59.311876 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-11-23 01:09:59.311886 | orchestrator | Sunday 23 November 2025 01:07:12 +0000 (0:00:01.998) 0:02:00.392 ******* 2025-11-23 01:09:59.311895 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.311905 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.311915 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.311924 | orchestrator | 2025-11-23 01:09:59.311934 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-11-23 01:09:59.311944 | orchestrator | Sunday 23 November 2025 01:07:13 +0000 (0:00:01.194) 0:02:01.587 ******* 2025-11-23 01:09:59.311953 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.311963 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.311972 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.311993 | orchestrator | 2025-11-23 01:09:59.312012 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-11-23 01:09:59.312022 | orchestrator | Sunday 23 November 2025 01:07:14 +0000 (0:00:01.140) 0:02:02.727 ******* 2025-11-23 01:09:59.312031 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.312041 | orchestrator | changed: [testbed-node-1] 
2025-11-23 01:09:59.312051 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:09:59.312060 | orchestrator |
2025-11-23 01:09:59.312083 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-11-23 01:09:59.312093 | orchestrator | Sunday 23 November 2025 01:07:16 +0000 (0:00:02.101) 0:02:04.829 *******
2025-11-23 01:09:59.312103 | orchestrator | changed: [testbed-node-0]
2025-11-23 01:09:59.312112 | orchestrator | changed: [testbed-node-2]
2025-11-23 01:09:59.312122 | orchestrator | changed: [testbed-node-1]
2025-11-23 01:09:59.312188 | orchestrator |
2025-11-23 01:09:59.312198 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-11-23 01:09:59.312208 | orchestrator | Sunday 23 November 2025 01:07:18 +0000 (0:00:01.711) 0:02:06.541 *******
2025-11-23 01:09:59.312218 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:09:59.312227 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:09:59.312237 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:09:59.312246 | orchestrator |
2025-11-23 01:09:59.312256 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-11-23 01:09:59.312266 | orchestrator | Sunday 23 November 2025 01:07:19 +0000 (0:00:00.596) 0:02:07.138 *******
2025-11-23 01:09:59.312275 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:09:59.312285 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:09:59.312294 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:09:59.312304 | orchestrator |
2025-11-23 01:09:59.312313 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-11-23 01:09:59.312323 | orchestrator | Sunday 23 November 2025 01:07:21 +0000 (0:00:02.817) 0:02:09.955 *******
2025-11-23 01:09:59.312333 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-23 01:09:59.312342 | orchestrator |
2025-11-23 01:09:59.312352 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-11-23 01:09:59.312361 | orchestrator | Sunday 23 November 2025 01:07:22 +0000 (0:00:00.565) 0:02:10.521 *******
2025-11-23 01:09:59.312371 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:09:59.312380 | orchestrator |
2025-11-23 01:09:59.312390 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-11-23 01:09:59.312399 | orchestrator | Sunday 23 November 2025 01:07:25 +0000 (0:00:03.434) 0:02:13.955 *******
2025-11-23 01:09:59.312409 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:09:59.312418 | orchestrator |
2025-11-23 01:09:59.312428 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-11-23 01:09:59.312437 | orchestrator | Sunday 23 November 2025 01:07:29 +0000 (0:00:03.284) 0:02:17.240 *******
2025-11-23 01:09:59.312447 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-11-23 01:09:59.312457 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-11-23 01:09:59.312466 | orchestrator |
2025-11-23 01:09:59.312476 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-11-23 01:09:59.312486 | orchestrator | Sunday 23 November 2025 01:07:36 +0000 (0:00:07.322) 0:02:24.563 *******
2025-11-23 01:09:59.312495 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:09:59.312505 | orchestrator |
2025-11-23 01:09:59.312515 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-11-23 01:09:59.312524 | orchestrator | Sunday 23 November 2025 01:07:39 +0000 (0:00:03.452) 0:02:28.015 *******
2025-11-23 01:09:59.312534 | orchestrator | ok: [testbed-node-0]
2025-11-23 01:09:59.312543 | orchestrator | ok: [testbed-node-1]
2025-11-23 01:09:59.312553 | orchestrator | ok: [testbed-node-2]
2025-11-23 01:09:59.312562 | orchestrator |
2025-11-23 01:09:59.312572 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-11-23 01:09:59.312589 | orchestrator | Sunday 23 November 2025 01:07:40 +0000 (0:00:00.278) 0:02:28.294 *******
2025-11-23 01:09:59.312603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-23 01:09:59.312629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-23 01:09:59.312641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-23 01:09:59.312652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-23 01:09:59.312664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-23 01:09:59.312682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-23 01:09:59.312693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-23 01:09:59.312705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-23 01:09:59.312726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-23 01:09:59.312737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-23 01:09:59.312749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-23 01:09:59.312759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-23 01:09:59.312775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-23 01:09:59.312786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-23 01:09:59.312801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-23 01:09:59.312811 | orchestrator |
2025-11-23 01:09:59.312821 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-11-23 01:09:59.312831 | orchestrator | Sunday 23 November 2025 01:07:42 +0000 (0:00:02.288) 0:02:30.582 *******
2025-11-23 01:09:59.312840 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:09:59.312850 | orchestrator |
2025-11-23 01:09:59.312865 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-11-23 01:09:59.312875 | orchestrator | Sunday 23 November 2025 01:07:42 +0000 (0:00:00.141) 0:02:30.724 *******
2025-11-23 01:09:59.312885 | orchestrator | skipping: [testbed-node-0]
2025-11-23 01:09:59.312895 | orchestrator | skipping: [testbed-node-1]
2025-11-23 01:09:59.312905 | orchestrator | skipping: [testbed-node-2]
2025-11-23 01:09:59.312914 | orchestrator |
2025-11-23 01:09:59.312924 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-11-23 01:09:59.312933 | orchestrator | Sunday 23 November 2025 01:07:43 +0000 (0:00:00.373) 0:02:31.097 *******
2025-11-23 01:09:59.312943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-23 01:09:59.312961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-23 01:09:59.312971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-23 01:09:59.312982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name':
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.312992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:09:59.313002 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:09:59.313024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 01:09:59.313035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 01:09:59.313046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  
2025-11-23 01:09:59.313072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:09:59.313082 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:09:59.313102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 01:09:59.313120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 01:09:59.313149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:09:59.313185 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:09:59.313195 | orchestrator | 2025-11-23 01:09:59.313205 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-23 01:09:59.313216 | orchestrator | Sunday 23 November 2025 01:07:43 +0000 (0:00:00.610) 0:02:31.708 ******* 2025-11-23 01:09:59.313233 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-23 01:09:59.313249 | orchestrator | 2025-11-23 01:09:59.313265 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-11-23 01:09:59.313282 | orchestrator | Sunday 23 November 2025 01:07:44 +0000 (0:00:00.494) 0:02:32.203 ******* 2025-11-23 01:09:59.313299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.313326 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.313338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.313355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.313366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.313376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.313386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.313401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.313417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.313434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.313444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.313454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.313464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2025-11-23 01:09:59.313474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.313493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.313504 | orchestrator | 2025-11-23 01:09:59.313514 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-11-23 01:09:59.313523 | orchestrator | Sunday 23 November 2025 01:07:49 +0000 (0:00:05.177) 0:02:37.381 ******* 2025-11-23 01:09:59.313542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 01:09:59.313552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 01:09:59.313566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:09:59.313596 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:09:59.313617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-11-23 01:09:59.313634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 01:09:59.313645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:09:59.313675 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:09:59.313685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 01:09:59.313700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 01:09:59.313722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 
01:09:59.313753 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:09:59.313763 | orchestrator | 2025-11-23 01:09:59.313773 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-11-23 01:09:59.313782 | orchestrator | Sunday 23 November 2025 01:07:50 +0000 (0:00:00.753) 0:02:38.134 ******* 2025-11-23 01:09:59.313793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 01:09:59.313803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 01:09:59.313817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:09:59.313859 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:09:59.313869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 01:09:59.313879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 01:09:59.313889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2025-11-23 01:09:59.313903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:09:59.313940 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:09:59.313950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-23 01:09:59.313960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-23 01:09:59.313970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-23 01:09:59.313990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-23 01:09:59.314006 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:09:59.314368 | orchestrator | 2025-11-23 01:09:59.314388 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-11-23 01:09:59.314402 | orchestrator | Sunday 23 November 2025 01:07:50 +0000 (0:00:00.792) 0:02:38.927 ******* 2025-11-23 01:09:59.314419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.314429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.314437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.314446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.314462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.314474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.314488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314579 | orchestrator | 2025-11-23 01:09:59.314587 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-11-23 01:09:59.314595 | orchestrator | Sunday 23 November 2025 01:07:56 +0000 (0:00:05.220) 0:02:44.147 ******* 2025-11-23 01:09:59.314603 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-23 01:09:59.314611 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-23 01:09:59.314619 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-23 01:09:59.314627 | orchestrator | 2025-11-23 01:09:59.314635 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-11-23 01:09:59.314642 | orchestrator | Sunday 23 November 2025 01:07:57 +0000 (0:00:01.687) 0:02:45.835 ******* 2025-11-23 01:09:59.314651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.314668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.314682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.314691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.314699 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.314707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.314721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.314809 | orchestrator | 2025-11-23 01:09:59.314817 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-11-23 01:09:59.314825 | orchestrator | Sunday 23 November 2025 01:08:12 +0000 (0:00:14.751) 0:03:00.587 ******* 2025-11-23 01:09:59.314833 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.314841 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.314849 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.314857 | orchestrator | 2025-11-23 01:09:59.314868 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-11-23 01:09:59.314876 | orchestrator | Sunday 23 November 2025 01:08:14 +0000 (0:00:01.459) 0:03:02.046 ******* 2025-11-23 01:09:59.314884 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.314892 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.314903 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.314911 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-23 01:09:59.314919 | orchestrator | changed: [testbed-node-1] => 
(item=client_ca.cert.pem) 2025-11-23 01:09:59.314927 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-23 01:09:59.314935 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.314943 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.314950 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.314958 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-23 01:09:59.314966 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-23 01:09:59.314973 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-23 01:09:59.314981 | orchestrator | 2025-11-23 01:09:59.314989 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-11-23 01:09:59.314999 | orchestrator | Sunday 23 November 2025 01:08:18 +0000 (0:00:04.870) 0:03:06.917 ******* 2025-11-23 01:09:59.315007 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.315016 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.315025 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.315034 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-23 01:09:59.315050 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-23 01:09:59.315059 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-23 01:09:59.315068 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.315077 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.315086 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.315095 | orchestrator | changed: [testbed-node-0] => 
(item=server_ca.key.pem) 2025-11-23 01:09:59.315103 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-23 01:09:59.315112 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-23 01:09:59.315121 | orchestrator | 2025-11-23 01:09:59.315149 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-11-23 01:09:59.315159 | orchestrator | Sunday 23 November 2025 01:08:23 +0000 (0:00:05.059) 0:03:11.977 ******* 2025-11-23 01:09:59.315168 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.315178 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.315204 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-23 01:09:59.315214 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-23 01:09:59.315231 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-23 01:09:59.315241 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-23 01:09:59.315251 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.315260 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.315269 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-23 01:09:59.315278 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-23 01:09:59.315286 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-23 01:09:59.315295 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-23 01:09:59.315304 | orchestrator | 2025-11-23 01:09:59.315313 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-11-23 01:09:59.315322 | orchestrator | Sunday 23 November 2025 01:08:28 +0000 (0:00:04.837) 0:03:16.815 ******* 
2025-11-23 01:09:59.315332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.315352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.315371 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-23 01:09:59.315380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.315388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.315396 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-23 01:09:59.315405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-23 01:09:59.315506 | orchestrator | 2025-11-23 01:09:59.315515 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-23 01:09:59.315523 | orchestrator | Sunday 23 November 2025 01:08:32 +0000 (0:00:03.544) 0:03:20.360 ******* 2025-11-23 01:09:59.315530 | orchestrator | skipping: [testbed-node-0] 2025-11-23 01:09:59.315538 | orchestrator | skipping: [testbed-node-1] 2025-11-23 01:09:59.315546 | orchestrator | skipping: [testbed-node-2] 2025-11-23 01:09:59.315554 | orchestrator | 2025-11-23 01:09:59.315562 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-11-23 01:09:59.315569 | orchestrator | Sunday 23 November 2025 01:08:32 +0000 (0:00:00.294) 0:03:20.654 ******* 2025-11-23 01:09:59.315577 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315585 | orchestrator | 2025-11-23 01:09:59.315592 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-11-23 01:09:59.315600 | orchestrator | Sunday 23 November 2025 01:08:34 +0000 (0:00:02.179) 0:03:22.834 ******* 2025-11-23 01:09:59.315608 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315621 | orchestrator | 2025-11-23 01:09:59.315633 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-11-23 01:09:59.315641 | orchestrator | Sunday 23 November 2025 01:08:36 +0000 (0:00:02.137) 0:03:24.972 ******* 2025-11-23 01:09:59.315648 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315656 | orchestrator | 2025-11-23 01:09:59.315664 | orchestrator | TASK [octavia : Creating Octavia persistence database user and 
setting permissions] *** 2025-11-23 01:09:59.315672 | orchestrator | Sunday 23 November 2025 01:08:39 +0000 (0:00:02.378) 0:03:27.351 ******* 2025-11-23 01:09:59.315680 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315687 | orchestrator | 2025-11-23 01:09:59.315695 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-11-23 01:09:59.315703 | orchestrator | Sunday 23 November 2025 01:08:41 +0000 (0:00:02.525) 0:03:29.876 ******* 2025-11-23 01:09:59.315711 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315718 | orchestrator | 2025-11-23 01:09:59.315726 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-23 01:09:59.315734 | orchestrator | Sunday 23 November 2025 01:09:03 +0000 (0:00:22.087) 0:03:51.964 ******* 2025-11-23 01:09:59.315742 | orchestrator | 2025-11-23 01:09:59.315750 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-23 01:09:59.315757 | orchestrator | Sunday 23 November 2025 01:09:04 +0000 (0:00:00.059) 0:03:52.024 ******* 2025-11-23 01:09:59.315765 | orchestrator | 2025-11-23 01:09:59.315773 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-23 01:09:59.315780 | orchestrator | Sunday 23 November 2025 01:09:04 +0000 (0:00:00.062) 0:03:52.086 ******* 2025-11-23 01:09:59.315788 | orchestrator | 2025-11-23 01:09:59.315796 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-11-23 01:09:59.315804 | orchestrator | Sunday 23 November 2025 01:09:04 +0000 (0:00:00.062) 0:03:52.149 ******* 2025-11-23 01:09:59.315812 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315819 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.315827 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.315835 | orchestrator | 2025-11-23 01:09:59.315843 | 
orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-11-23 01:09:59.315850 | orchestrator | Sunday 23 November 2025 01:09:20 +0000 (0:00:16.748) 0:04:08.897 ******* 2025-11-23 01:09:59.315858 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315866 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.315874 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.315881 | orchestrator | 2025-11-23 01:09:59.315889 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-11-23 01:09:59.315902 | orchestrator | Sunday 23 November 2025 01:09:32 +0000 (0:00:11.779) 0:04:20.677 ******* 2025-11-23 01:09:59.315910 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315918 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.315925 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.315933 | orchestrator | 2025-11-23 01:09:59.315941 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-11-23 01:09:59.315949 | orchestrator | Sunday 23 November 2025 01:09:42 +0000 (0:00:10.295) 0:04:30.973 ******* 2025-11-23 01:09:59.315956 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.315964 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.315972 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.315980 | orchestrator | 2025-11-23 01:09:59.315988 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-11-23 01:09:59.315996 | orchestrator | Sunday 23 November 2025 01:09:52 +0000 (0:00:09.845) 0:04:40.818 ******* 2025-11-23 01:09:59.316003 | orchestrator | changed: [testbed-node-0] 2025-11-23 01:09:59.316011 | orchestrator | changed: [testbed-node-1] 2025-11-23 01:09:59.316019 | orchestrator | changed: [testbed-node-2] 2025-11-23 01:09:59.316027 | orchestrator | 2025-11-23 01:09:59.316034 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-11-23 01:09:59.316043 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-23 01:09:59.316051 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-23 01:09:59.316063 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-23 01:09:59.316071 | orchestrator | 2025-11-23 01:09:59.316078 | orchestrator | 2025-11-23 01:09:59.316086 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-23 01:09:59.316094 | orchestrator | Sunday 23 November 2025 01:09:58 +0000 (0:00:05.317) 0:04:46.136 ******* 2025-11-23 01:09:59.316106 | orchestrator | =============================================================================== 2025-11-23 01:09:59.316114 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.09s 2025-11-23 01:09:59.316122 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.75s 2025-11-23 01:09:59.316150 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.19s 2025-11-23 01:09:59.316159 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.11s 2025-11-23 01:09:59.316167 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 14.75s 2025-11-23 01:09:59.316174 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.78s 2025-11-23 01:09:59.316182 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.26s 2025-11-23 01:09:59.316190 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.30s 2025-11-23 01:09:59.316197 | orchestrator | octavia : Restart 
octavia-housekeeping container ------------------------ 9.85s 2025-11-23 01:09:59.316205 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.25s 2025-11-23 01:09:59.316213 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.67s 2025-11-23 01:09:59.316220 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.32s 2025-11-23 01:09:59.316228 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.84s 2025-11-23 01:09:59.316236 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.10s 2025-11-23 01:09:59.316243 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.32s 2025-11-23 01:09:59.316251 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.25s 2025-11-23 01:09:59.316265 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.22s 2025-11-23 01:09:59.316272 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.18s 2025-11-23 01:09:59.316280 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.06s 2025-11-23 01:09:59.316288 | orchestrator | octavia : Create loadbalancer management network ------------------------ 4.99s 2025-11-23 01:09:59.316295 | orchestrator | 2025-11-23 01:09:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:10:02.351094 | orchestrator | 2025-11-23 01:10:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:10:02.351226 | orchestrator | 2025-11-23 01:10:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:10:05.393252 | orchestrator | 2025-11-23 01:10:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:10:05.393722 | orchestrator | 2025-11-23 01:10:05 | INFO  | 
Wait 1 second(s) until the next check [repeated polling output elided: "Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED" / "Wait 1 second(s) until the next check" every ~3 s from 01:10:08 through 01:12:07] 2025-11-23 01:12:10.042249 
| orchestrator | 2025-11-23 01:12:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:10.042349 | orchestrator | 2025-11-23 01:12:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:13.080679 | orchestrator | 2025-11-23 01:12:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:13.080778 | orchestrator | 2025-11-23 01:12:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:16.120149 | orchestrator | 2025-11-23 01:12:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:16.120311 | orchestrator | 2025-11-23 01:12:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:19.158451 | orchestrator | 2025-11-23 01:12:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:19.158548 | orchestrator | 2025-11-23 01:12:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:22.205966 | orchestrator | 2025-11-23 01:12:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:22.206103 | orchestrator | 2025-11-23 01:12:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:25.247989 | orchestrator | 2025-11-23 01:12:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:25.248118 | orchestrator | 2025-11-23 01:12:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:28.290261 | orchestrator | 2025-11-23 01:12:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:28.290343 | orchestrator | 2025-11-23 01:12:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:31.325488 | orchestrator | 2025-11-23 01:12:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:31.325600 | orchestrator | 2025-11-23 01:12:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:34.364362 | orchestrator 
| 2025-11-23 01:12:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:34.364460 | orchestrator | 2025-11-23 01:12:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:37.400367 | orchestrator | 2025-11-23 01:12:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:37.400460 | orchestrator | 2025-11-23 01:12:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:40.443022 | orchestrator | 2025-11-23 01:12:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:40.443123 | orchestrator | 2025-11-23 01:12:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:43.485170 | orchestrator | 2025-11-23 01:12:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:43.485323 | orchestrator | 2025-11-23 01:12:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:46.526157 | orchestrator | 2025-11-23 01:12:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:46.526305 | orchestrator | 2025-11-23 01:12:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:49.567559 | orchestrator | 2025-11-23 01:12:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:49.567662 | orchestrator | 2025-11-23 01:12:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:52.602923 | orchestrator | 2025-11-23 01:12:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:52.603050 | orchestrator | 2025-11-23 01:12:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:55.640504 | orchestrator | 2025-11-23 01:12:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:55.640658 | orchestrator | 2025-11-23 01:12:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:12:58.682553 | orchestrator | 2025-11-23 
01:12:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:12:58.682655 | orchestrator | 2025-11-23 01:12:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:01.723377 | orchestrator | 2025-11-23 01:13:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:01.723480 | orchestrator | 2025-11-23 01:13:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:04.762330 | orchestrator | 2025-11-23 01:13:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:04.762428 | orchestrator | 2025-11-23 01:13:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:07.800405 | orchestrator | 2025-11-23 01:13:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:07.800482 | orchestrator | 2025-11-23 01:13:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:10.840097 | orchestrator | 2025-11-23 01:13:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:10.840192 | orchestrator | 2025-11-23 01:13:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:13.883348 | orchestrator | 2025-11-23 01:13:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:13.883448 | orchestrator | 2025-11-23 01:13:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:16.930322 | orchestrator | 2025-11-23 01:13:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:16.930430 | orchestrator | 2025-11-23 01:13:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:19.969936 | orchestrator | 2025-11-23 01:13:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:19.970133 | orchestrator | 2025-11-23 01:13:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:23.006667 | orchestrator | 2025-11-23 01:13:23 | INFO 
 | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:23.006765 | orchestrator | 2025-11-23 01:13:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:26.045274 | orchestrator | 2025-11-23 01:13:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:26.045364 | orchestrator | 2025-11-23 01:13:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:29.083144 | orchestrator | 2025-11-23 01:13:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:29.083292 | orchestrator | 2025-11-23 01:13:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:32.122321 | orchestrator | 2025-11-23 01:13:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:32.122423 | orchestrator | 2025-11-23 01:13:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:35.160506 | orchestrator | 2025-11-23 01:13:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:35.160572 | orchestrator | 2025-11-23 01:13:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:38.207409 | orchestrator | 2025-11-23 01:13:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:38.207506 | orchestrator | 2025-11-23 01:13:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:41.247055 | orchestrator | 2025-11-23 01:13:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:41.247139 | orchestrator | 2025-11-23 01:13:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:44.284889 | orchestrator | 2025-11-23 01:13:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:44.285009 | orchestrator | 2025-11-23 01:13:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:47.326173 | orchestrator | 2025-11-23 01:13:47 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:47.326304 | orchestrator | 2025-11-23 01:13:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:50.369913 | orchestrator | 2025-11-23 01:13:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:50.370076 | orchestrator | 2025-11-23 01:13:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:53.412278 | orchestrator | 2025-11-23 01:13:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:53.412382 | orchestrator | 2025-11-23 01:13:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:56.450571 | orchestrator | 2025-11-23 01:13:56 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:56.450692 | orchestrator | 2025-11-23 01:13:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:13:59.484951 | orchestrator | 2025-11-23 01:13:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:13:59.485018 | orchestrator | 2025-11-23 01:13:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:02.527117 | orchestrator | 2025-11-23 01:14:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:02.527199 | orchestrator | 2025-11-23 01:14:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:05.567936 | orchestrator | 2025-11-23 01:14:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:05.568009 | orchestrator | 2025-11-23 01:14:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:08.610158 | orchestrator | 2025-11-23 01:14:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:08.610206 | orchestrator | 2025-11-23 01:14:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:11.648751 | orchestrator | 2025-11-23 01:14:11 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:11.648872 | orchestrator | 2025-11-23 01:14:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:14.688179 | orchestrator | 2025-11-23 01:14:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:14.688360 | orchestrator | 2025-11-23 01:14:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:17.731028 | orchestrator | 2025-11-23 01:14:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:17.731129 | orchestrator | 2025-11-23 01:14:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:20.771719 | orchestrator | 2025-11-23 01:14:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:20.771831 | orchestrator | 2025-11-23 01:14:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:23.812063 | orchestrator | 2025-11-23 01:14:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:23.812196 | orchestrator | 2025-11-23 01:14:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:26.853838 | orchestrator | 2025-11-23 01:14:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:26.853977 | orchestrator | 2025-11-23 01:14:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:29.893181 | orchestrator | 2025-11-23 01:14:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:29.893384 | orchestrator | 2025-11-23 01:14:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:32.930414 | orchestrator | 2025-11-23 01:14:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:32.930499 | orchestrator | 2025-11-23 01:14:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:35.969780 | orchestrator | 2025-11-23 01:14:35 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:35.969886 | orchestrator | 2025-11-23 01:14:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:39.017669 | orchestrator | 2025-11-23 01:14:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:39.017798 | orchestrator | 2025-11-23 01:14:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:42.063220 | orchestrator | 2025-11-23 01:14:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:42.063370 | orchestrator | 2025-11-23 01:14:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:45.104225 | orchestrator | 2025-11-23 01:14:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:45.104352 | orchestrator | 2025-11-23 01:14:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:48.147350 | orchestrator | 2025-11-23 01:14:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:48.147485 | orchestrator | 2025-11-23 01:14:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:51.192872 | orchestrator | 2025-11-23 01:14:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:51.192979 | orchestrator | 2025-11-23 01:14:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:54.231206 | orchestrator | 2025-11-23 01:14:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:54.231363 | orchestrator | 2025-11-23 01:14:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:14:57.267204 | orchestrator | 2025-11-23 01:14:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:14:57.267385 | orchestrator | 2025-11-23 01:14:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:00.305764 | orchestrator | 2025-11-23 01:15:00 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:00.305856 | orchestrator | 2025-11-23 01:15:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:03.350561 | orchestrator | 2025-11-23 01:15:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:03.350659 | orchestrator | 2025-11-23 01:15:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:06.387008 | orchestrator | 2025-11-23 01:15:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:06.387107 | orchestrator | 2025-11-23 01:15:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:09.427790 | orchestrator | 2025-11-23 01:15:09 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:09.427897 | orchestrator | 2025-11-23 01:15:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:12.466067 | orchestrator | 2025-11-23 01:15:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:12.466149 | orchestrator | 2025-11-23 01:15:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:15.508480 | orchestrator | 2025-11-23 01:15:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:15.508571 | orchestrator | 2025-11-23 01:15:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:18.541440 | orchestrator | 2025-11-23 01:15:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:18.541546 | orchestrator | 2025-11-23 01:15:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:21.579476 | orchestrator | 2025-11-23 01:15:21 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:21.579590 | orchestrator | 2025-11-23 01:15:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:24.615770 | orchestrator | 2025-11-23 01:15:24 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:24.615875 | orchestrator | 2025-11-23 01:15:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:27.656707 | orchestrator | 2025-11-23 01:15:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:27.656801 | orchestrator | 2025-11-23 01:15:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:30.695649 | orchestrator | 2025-11-23 01:15:30 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:30.695724 | orchestrator | 2025-11-23 01:15:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:33.738774 | orchestrator | 2025-11-23 01:15:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:33.738878 | orchestrator | 2025-11-23 01:15:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:36.778797 | orchestrator | 2025-11-23 01:15:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:36.778884 | orchestrator | 2025-11-23 01:15:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:39.818763 | orchestrator | 2025-11-23 01:15:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:39.818875 | orchestrator | 2025-11-23 01:15:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:42.859711 | orchestrator | 2025-11-23 01:15:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:42.859811 | orchestrator | 2025-11-23 01:15:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:45.902474 | orchestrator | 2025-11-23 01:15:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:45.902545 | orchestrator | 2025-11-23 01:15:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:48.948793 | orchestrator | 2025-11-23 01:15:48 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:48.948895 | orchestrator | 2025-11-23 01:15:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:51.989357 | orchestrator | 2025-11-23 01:15:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:51.989434 | orchestrator | 2025-11-23 01:15:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:55.031145 | orchestrator | 2025-11-23 01:15:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:55.031236 | orchestrator | 2025-11-23 01:15:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:15:58.075498 | orchestrator | 2025-11-23 01:15:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:15:58.075670 | orchestrator | 2025-11-23 01:15:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:01.117091 | orchestrator | 2025-11-23 01:16:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:01.117200 | orchestrator | 2025-11-23 01:16:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:04.160888 | orchestrator | 2025-11-23 01:16:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:04.160985 | orchestrator | 2025-11-23 01:16:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:07.197850 | orchestrator | 2025-11-23 01:16:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:07.197980 | orchestrator | 2025-11-23 01:16:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:10.231981 | orchestrator | 2025-11-23 01:16:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:10.232090 | orchestrator | 2025-11-23 01:16:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:13.269772 | orchestrator | 2025-11-23 01:16:13 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:13.269871 | orchestrator | 2025-11-23 01:16:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:16.316283 | orchestrator | 2025-11-23 01:16:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:16.316436 | orchestrator | 2025-11-23 01:16:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:19.357064 | orchestrator | 2025-11-23 01:16:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:19.357167 | orchestrator | 2025-11-23 01:16:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:22.398688 | orchestrator | 2025-11-23 01:16:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:22.398789 | orchestrator | 2025-11-23 01:16:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:25.439777 | orchestrator | 2025-11-23 01:16:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:25.439882 | orchestrator | 2025-11-23 01:16:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:28.479573 | orchestrator | 2025-11-23 01:16:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:28.479645 | orchestrator | 2025-11-23 01:16:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:31.515570 | orchestrator | 2025-11-23 01:16:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:31.515673 | orchestrator | 2025-11-23 01:16:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:34.553456 | orchestrator | 2025-11-23 01:16:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:34.553566 | orchestrator | 2025-11-23 01:16:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:37.593803 | orchestrator | 2025-11-23 01:16:37 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:37.593898 | orchestrator | 2025-11-23 01:16:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:40.631646 | orchestrator | 2025-11-23 01:16:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:40.631750 | orchestrator | 2025-11-23 01:16:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:43.677596 | orchestrator | 2025-11-23 01:16:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:43.677760 | orchestrator | 2025-11-23 01:16:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:46.720304 | orchestrator | 2025-11-23 01:16:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:46.720444 | orchestrator | 2025-11-23 01:16:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:49.763661 | orchestrator | 2025-11-23 01:16:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:49.763742 | orchestrator | 2025-11-23 01:16:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:52.808028 | orchestrator | 2025-11-23 01:16:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:52.808114 | orchestrator | 2025-11-23 01:16:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:55.847835 | orchestrator | 2025-11-23 01:16:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:55.847920 | orchestrator | 2025-11-23 01:16:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:16:58.890169 | orchestrator | 2025-11-23 01:16:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:16:58.890270 | orchestrator | 2025-11-23 01:16:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:01.932855 | orchestrator | 2025-11-23 01:17:01 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:01.932968 | orchestrator | 2025-11-23 01:17:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:04.972880 | orchestrator | 2025-11-23 01:17:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:04.973799 | orchestrator | 2025-11-23 01:17:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:08.012134 | orchestrator | 2025-11-23 01:17:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:08.012228 | orchestrator | 2025-11-23 01:17:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:11.053145 | orchestrator | 2025-11-23 01:17:11 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:11.053267 | orchestrator | 2025-11-23 01:17:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:14.095243 | orchestrator | 2025-11-23 01:17:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:14.095320 | orchestrator | 2025-11-23 01:17:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:17.137126 | orchestrator | 2025-11-23 01:17:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:17.137235 | orchestrator | 2025-11-23 01:17:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:20.177405 | orchestrator | 2025-11-23 01:17:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:20.177506 | orchestrator | 2025-11-23 01:17:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:23.226125 | orchestrator | 2025-11-23 01:17:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:23.226211 | orchestrator | 2025-11-23 01:17:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:26.262570 | orchestrator | 2025-11-23 01:17:26 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:26.262700 | orchestrator | 2025-11-23 01:17:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:29.304753 | orchestrator | 2025-11-23 01:17:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:29.304890 | orchestrator | 2025-11-23 01:17:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:32.346181 | orchestrator | 2025-11-23 01:17:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:32.346291 | orchestrator | 2025-11-23 01:17:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:35.388565 | orchestrator | 2025-11-23 01:17:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:35.388666 | orchestrator | 2025-11-23 01:17:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:38.429682 | orchestrator | 2025-11-23 01:17:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:38.429792 | orchestrator | 2025-11-23 01:17:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:41.473456 | orchestrator | 2025-11-23 01:17:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:41.473556 | orchestrator | 2025-11-23 01:17:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:44.515407 | orchestrator | 2025-11-23 01:17:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:44.515490 | orchestrator | 2025-11-23 01:17:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:47.553640 | orchestrator | 2025-11-23 01:17:47 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:47.553737 | orchestrator | 2025-11-23 01:17:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:50.592863 | orchestrator | 2025-11-23 01:17:50 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:50.592967 | orchestrator | 2025-11-23 01:17:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:53.635595 | orchestrator | 2025-11-23 01:17:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:53.635725 | orchestrator | 2025-11-23 01:17:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:56.677897 | orchestrator | 2025-11-23 01:17:56 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:56.678071 | orchestrator | 2025-11-23 01:17:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:17:59.719540 | orchestrator | 2025-11-23 01:17:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:17:59.719654 | orchestrator | 2025-11-23 01:17:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:18:02.759576 | orchestrator | 2025-11-23 01:18:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:18:02.759686 | orchestrator | 2025-11-23 01:18:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:18:05.801754 | orchestrator | 2025-11-23 01:18:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:18:05.801854 | orchestrator | 2025-11-23 01:18:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:18:08.840413 | orchestrator | 2025-11-23 01:18:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:18:08.840510 | orchestrator | 2025-11-23 01:18:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:18:11.883791 | orchestrator | 2025-11-23 01:18:11 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:18:11.884034 | orchestrator | 2025-11-23 01:18:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:18:14.929828 | orchestrator | 2025-11-23 01:18:14 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:18:14.930157 | orchestrator | 2025-11-23 01:18:14 | INFO  | Wait 1 second(s) until the next check 
[… identical "Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED" / "Wait 1 second(s) until the next check" polling entries, repeated every ~3 seconds from 01:18:17 through 01:26:42, omitted …] 
2025-11-23 01:26:45.668029 | orchestrator | 2025-11-23 01:26:45 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:26:45.668123 | orchestrator | 2025-11-23 01:26:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:26:48.711079 | orchestrator | 2025-11-23 01:26:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:26:48.711198 | orchestrator | 2025-11-23 01:26:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:26:51.751121 | orchestrator | 2025-11-23 01:26:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:26:51.751287 | orchestrator | 2025-11-23 01:26:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:26:54.790561 | orchestrator | 2025-11-23 01:26:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:26:54.790635 | orchestrator | 2025-11-23 01:26:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:26:57.828178 | orchestrator | 2025-11-23 01:26:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:26:57.828286 | orchestrator | 2025-11-23 01:26:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:00.865262 | orchestrator | 2025-11-23 01:27:00 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:00.865345 | orchestrator | 2025-11-23 01:27:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:03.902809 | orchestrator | 2025-11-23 01:27:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:03.902882 | orchestrator | 2025-11-23 01:27:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:06.940189 | orchestrator | 2025-11-23 01:27:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:06.940438 | orchestrator | 2025-11-23 01:27:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:09.980998 | orchestrator | 2025-11-23 01:27:09 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:09.981104 | orchestrator | 2025-11-23 01:27:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:13.017923 | orchestrator | 2025-11-23 01:27:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:13.018090 | orchestrator | 2025-11-23 01:27:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:16.058330 | orchestrator | 2025-11-23 01:27:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:16.058448 | orchestrator | 2025-11-23 01:27:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:19.098396 | orchestrator | 2025-11-23 01:27:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:19.098502 | orchestrator | 2025-11-23 01:27:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:22.131956 | orchestrator | 2025-11-23 01:27:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:22.132059 | orchestrator | 2025-11-23 01:27:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:25.170373 | orchestrator | 2025-11-23 01:27:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:25.170475 | orchestrator | 2025-11-23 01:27:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:28.207710 | orchestrator | 2025-11-23 01:27:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:28.207836 | orchestrator | 2025-11-23 01:27:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:31.245510 | orchestrator | 2025-11-23 01:27:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:31.245620 | orchestrator | 2025-11-23 01:27:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:34.287410 | orchestrator | 2025-11-23 01:27:34 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:34.287513 | orchestrator | 2025-11-23 01:27:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:37.329494 | orchestrator | 2025-11-23 01:27:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:37.329610 | orchestrator | 2025-11-23 01:27:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:40.370861 | orchestrator | 2025-11-23 01:27:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:40.370982 | orchestrator | 2025-11-23 01:27:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:43.412652 | orchestrator | 2025-11-23 01:27:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:43.412806 | orchestrator | 2025-11-23 01:27:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:46.457334 | orchestrator | 2025-11-23 01:27:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:46.457457 | orchestrator | 2025-11-23 01:27:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:49.495842 | orchestrator | 2025-11-23 01:27:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:49.495943 | orchestrator | 2025-11-23 01:27:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:52.533485 | orchestrator | 2025-11-23 01:27:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:52.533703 | orchestrator | 2025-11-23 01:27:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:55.573551 | orchestrator | 2025-11-23 01:27:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:55.573654 | orchestrator | 2025-11-23 01:27:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:27:58.615516 | orchestrator | 2025-11-23 01:27:58 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:27:58.615625 | orchestrator | 2025-11-23 01:27:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:01.651331 | orchestrator | 2025-11-23 01:28:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:01.651432 | orchestrator | 2025-11-23 01:28:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:04.688998 | orchestrator | 2025-11-23 01:28:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:04.689130 | orchestrator | 2025-11-23 01:28:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:07.732026 | orchestrator | 2025-11-23 01:28:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:07.732107 | orchestrator | 2025-11-23 01:28:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:10.775913 | orchestrator | 2025-11-23 01:28:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:10.776024 | orchestrator | 2025-11-23 01:28:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:13.815981 | orchestrator | 2025-11-23 01:28:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:13.816078 | orchestrator | 2025-11-23 01:28:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:16.857139 | orchestrator | 2025-11-23 01:28:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:16.857245 | orchestrator | 2025-11-23 01:28:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:19.899631 | orchestrator | 2025-11-23 01:28:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:19.899801 | orchestrator | 2025-11-23 01:28:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:22.931454 | orchestrator | 2025-11-23 01:28:22 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:22.931568 | orchestrator | 2025-11-23 01:28:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:25.973881 | orchestrator | 2025-11-23 01:28:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:25.973972 | orchestrator | 2025-11-23 01:28:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:29.015413 | orchestrator | 2025-11-23 01:28:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:29.015501 | orchestrator | 2025-11-23 01:28:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:32.055819 | orchestrator | 2025-11-23 01:28:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:32.055989 | orchestrator | 2025-11-23 01:28:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:35.101298 | orchestrator | 2025-11-23 01:28:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:35.101399 | orchestrator | 2025-11-23 01:28:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:38.142209 | orchestrator | 2025-11-23 01:28:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:38.142324 | orchestrator | 2025-11-23 01:28:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:41.181971 | orchestrator | 2025-11-23 01:28:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:41.182101 | orchestrator | 2025-11-23 01:28:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:44.222445 | orchestrator | 2025-11-23 01:28:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:44.222568 | orchestrator | 2025-11-23 01:28:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:47.261251 | orchestrator | 2025-11-23 01:28:47 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:47.261373 | orchestrator | 2025-11-23 01:28:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:50.299221 | orchestrator | 2025-11-23 01:28:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:50.299341 | orchestrator | 2025-11-23 01:28:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:53.338303 | orchestrator | 2025-11-23 01:28:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:53.338426 | orchestrator | 2025-11-23 01:28:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:56.378874 | orchestrator | 2025-11-23 01:28:56 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:56.378978 | orchestrator | 2025-11-23 01:28:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:28:59.414386 | orchestrator | 2025-11-23 01:28:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:28:59.414509 | orchestrator | 2025-11-23 01:28:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:02.452463 | orchestrator | 2025-11-23 01:29:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:02.452586 | orchestrator | 2025-11-23 01:29:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:05.495767 | orchestrator | 2025-11-23 01:29:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:05.495872 | orchestrator | 2025-11-23 01:29:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:08.540089 | orchestrator | 2025-11-23 01:29:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:08.540195 | orchestrator | 2025-11-23 01:29:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:11.580955 | orchestrator | 2025-11-23 01:29:11 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:11.581054 | orchestrator | 2025-11-23 01:29:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:14.621199 | orchestrator | 2025-11-23 01:29:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:14.621289 | orchestrator | 2025-11-23 01:29:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:17.660604 | orchestrator | 2025-11-23 01:29:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:17.660761 | orchestrator | 2025-11-23 01:29:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:20.699815 | orchestrator | 2025-11-23 01:29:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:20.699927 | orchestrator | 2025-11-23 01:29:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:23.736758 | orchestrator | 2025-11-23 01:29:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:23.736846 | orchestrator | 2025-11-23 01:29:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:26.773056 | orchestrator | 2025-11-23 01:29:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:26.773226 | orchestrator | 2025-11-23 01:29:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:29.823484 | orchestrator | 2025-11-23 01:29:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:29.823562 | orchestrator | 2025-11-23 01:29:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:32.864985 | orchestrator | 2025-11-23 01:29:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:32.865141 | orchestrator | 2025-11-23 01:29:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:35.906229 | orchestrator | 2025-11-23 01:29:35 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:35.906326 | orchestrator | 2025-11-23 01:29:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:38.944172 | orchestrator | 2025-11-23 01:29:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:38.944274 | orchestrator | 2025-11-23 01:29:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:41.985933 | orchestrator | 2025-11-23 01:29:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:41.986104 | orchestrator | 2025-11-23 01:29:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:45.027995 | orchestrator | 2025-11-23 01:29:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:45.028101 | orchestrator | 2025-11-23 01:29:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:48.067416 | orchestrator | 2025-11-23 01:29:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:48.067517 | orchestrator | 2025-11-23 01:29:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:51.107018 | orchestrator | 2025-11-23 01:29:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:51.107162 | orchestrator | 2025-11-23 01:29:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:54.147412 | orchestrator | 2025-11-23 01:29:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:54.147493 | orchestrator | 2025-11-23 01:29:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:29:57.184164 | orchestrator | 2025-11-23 01:29:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:29:57.184291 | orchestrator | 2025-11-23 01:29:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:00.227366 | orchestrator | 2025-11-23 01:30:00 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:00.227465 | orchestrator | 2025-11-23 01:30:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:03.262647 | orchestrator | 2025-11-23 01:30:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:03.262856 | orchestrator | 2025-11-23 01:30:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:06.304363 | orchestrator | 2025-11-23 01:30:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:06.304614 | orchestrator | 2025-11-23 01:30:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:09.347138 | orchestrator | 2025-11-23 01:30:09 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:09.347241 | orchestrator | 2025-11-23 01:30:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:12.383308 | orchestrator | 2025-11-23 01:30:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:12.383410 | orchestrator | 2025-11-23 01:30:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:15.427027 | orchestrator | 2025-11-23 01:30:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:15.427098 | orchestrator | 2025-11-23 01:30:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:18.467536 | orchestrator | 2025-11-23 01:30:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:18.467636 | orchestrator | 2025-11-23 01:30:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:21.513379 | orchestrator | 2025-11-23 01:30:21 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:21.513475 | orchestrator | 2025-11-23 01:30:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:24.552555 | orchestrator | 2025-11-23 01:30:24 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:24.552687 | orchestrator | 2025-11-23 01:30:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:27.594491 | orchestrator | 2025-11-23 01:30:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:27.594621 | orchestrator | 2025-11-23 01:30:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:30.634296 | orchestrator | 2025-11-23 01:30:30 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:30.634397 | orchestrator | 2025-11-23 01:30:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:33.674681 | orchestrator | 2025-11-23 01:30:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:33.674776 | orchestrator | 2025-11-23 01:30:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:36.715008 | orchestrator | 2025-11-23 01:30:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:36.715095 | orchestrator | 2025-11-23 01:30:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:39.755810 | orchestrator | 2025-11-23 01:30:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:39.755934 | orchestrator | 2025-11-23 01:30:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:42.797437 | orchestrator | 2025-11-23 01:30:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:42.797543 | orchestrator | 2025-11-23 01:30:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:45.841699 | orchestrator | 2025-11-23 01:30:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:45.841851 | orchestrator | 2025-11-23 01:30:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:48.882903 | orchestrator | 2025-11-23 01:30:48 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:48.883025 | orchestrator | 2025-11-23 01:30:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:51.924986 | orchestrator | 2025-11-23 01:30:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:51.925097 | orchestrator | 2025-11-23 01:30:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:54.961178 | orchestrator | 2025-11-23 01:30:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:54.961276 | orchestrator | 2025-11-23 01:30:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:30:58.002379 | orchestrator | 2025-11-23 01:30:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:30:58.002501 | orchestrator | 2025-11-23 01:30:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:01.040940 | orchestrator | 2025-11-23 01:31:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:01.041043 | orchestrator | 2025-11-23 01:31:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:04.080654 | orchestrator | 2025-11-23 01:31:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:04.080841 | orchestrator | 2025-11-23 01:31:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:07.122305 | orchestrator | 2025-11-23 01:31:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:07.122440 | orchestrator | 2025-11-23 01:31:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:10.157111 | orchestrator | 2025-11-23 01:31:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:10.157226 | orchestrator | 2025-11-23 01:31:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:13.198470 | orchestrator | 2025-11-23 01:31:13 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:13.198563 | orchestrator | 2025-11-23 01:31:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:16.236811 | orchestrator | 2025-11-23 01:31:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:16.236911 | orchestrator | 2025-11-23 01:31:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:19.274242 | orchestrator | 2025-11-23 01:31:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:19.274336 | orchestrator | 2025-11-23 01:31:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:22.306660 | orchestrator | 2025-11-23 01:31:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:22.306737 | orchestrator | 2025-11-23 01:31:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:25.349572 | orchestrator | 2025-11-23 01:31:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:25.350596 | orchestrator | 2025-11-23 01:31:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:28.388031 | orchestrator | 2025-11-23 01:31:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:28.388134 | orchestrator | 2025-11-23 01:31:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:31.430701 | orchestrator | 2025-11-23 01:31:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:31.430862 | orchestrator | 2025-11-23 01:31:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:34.476307 | orchestrator | 2025-11-23 01:31:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:34.476422 | orchestrator | 2025-11-23 01:31:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:37.517506 | orchestrator | 2025-11-23 01:31:37 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:37.517620 | orchestrator | 2025-11-23 01:31:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:40.553977 | orchestrator | 2025-11-23 01:31:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:40.554188 | orchestrator | 2025-11-23 01:31:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:43.592694 | orchestrator | 2025-11-23 01:31:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:43.592882 | orchestrator | 2025-11-23 01:31:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:46.634231 | orchestrator | 2025-11-23 01:31:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:46.634329 | orchestrator | 2025-11-23 01:31:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:49.674285 | orchestrator | 2025-11-23 01:31:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:49.674381 | orchestrator | 2025-11-23 01:31:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:52.716122 | orchestrator | 2025-11-23 01:31:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:52.716263 | orchestrator | 2025-11-23 01:31:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:55.759061 | orchestrator | 2025-11-23 01:31:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:55.759166 | orchestrator | 2025-11-23 01:31:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:31:58.802922 | orchestrator | 2025-11-23 01:31:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:31:58.803027 | orchestrator | 2025-11-23 01:31:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:32:01.837078 | orchestrator | 2025-11-23 01:32:01 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:32:01.837179 | orchestrator | 2025-11-23 01:32:01 | INFO  | Wait 1 second(s) until the next check 
[... identical "Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED" / "Wait 1 second(s) until the next check" message pairs repeated roughly every 3 seconds from 01:32:04 through 01:40:29 ...]
2025-11-23 01:40:32.563207 | orchestrator | 2025-11-23 01:40:32 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:32.563309 | orchestrator | 2025-11-23 01:40:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:35.597689 | orchestrator | 2025-11-23 01:40:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:35.597873 | orchestrator | 2025-11-23 01:40:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:38.641818 | orchestrator | 2025-11-23 01:40:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:38.641919 | orchestrator | 2025-11-23 01:40:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:41.677806 | orchestrator | 2025-11-23 01:40:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:41.677908 | orchestrator | 2025-11-23 01:40:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:44.713738 | orchestrator | 2025-11-23 01:40:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:44.713814 | orchestrator | 2025-11-23 01:40:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:47.750343 | orchestrator | 2025-11-23 01:40:47 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:47.750443 | orchestrator | 2025-11-23 01:40:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:50.792574 | orchestrator | 2025-11-23 01:40:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:50.792676 | orchestrator | 2025-11-23 01:40:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:53.832730 | orchestrator | 2025-11-23 01:40:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:53.832840 | orchestrator | 2025-11-23 01:40:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:56.871413 | orchestrator | 2025-11-23 01:40:56 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:56.871512 | orchestrator | 2025-11-23 01:40:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:40:59.913693 | orchestrator | 2025-11-23 01:40:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:40:59.913800 | orchestrator | 2025-11-23 01:40:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:02.953505 | orchestrator | 2025-11-23 01:41:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:02.953600 | orchestrator | 2025-11-23 01:41:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:05.988428 | orchestrator | 2025-11-23 01:41:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:05.988516 | orchestrator | 2025-11-23 01:41:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:09.028939 | orchestrator | 2025-11-23 01:41:09 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:09.029135 | orchestrator | 2025-11-23 01:41:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:12.069101 | orchestrator | 2025-11-23 01:41:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:12.069213 | orchestrator | 2025-11-23 01:41:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:15.109211 | orchestrator | 2025-11-23 01:41:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:15.109317 | orchestrator | 2025-11-23 01:41:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:18.147489 | orchestrator | 2025-11-23 01:41:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:18.147596 | orchestrator | 2025-11-23 01:41:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:21.190308 | orchestrator | 2025-11-23 01:41:21 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:21.190408 | orchestrator | 2025-11-23 01:41:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:24.237359 | orchestrator | 2025-11-23 01:41:24 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:24.237462 | orchestrator | 2025-11-23 01:41:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:27.276296 | orchestrator | 2025-11-23 01:41:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:27.276395 | orchestrator | 2025-11-23 01:41:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:30.315148 | orchestrator | 2025-11-23 01:41:30 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:30.315245 | orchestrator | 2025-11-23 01:41:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:33.353623 | orchestrator | 2025-11-23 01:41:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:33.353730 | orchestrator | 2025-11-23 01:41:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:36.391638 | orchestrator | 2025-11-23 01:41:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:36.391747 | orchestrator | 2025-11-23 01:41:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:39.434728 | orchestrator | 2025-11-23 01:41:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:39.434846 | orchestrator | 2025-11-23 01:41:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:42.478306 | orchestrator | 2025-11-23 01:41:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:42.478404 | orchestrator | 2025-11-23 01:41:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:45.521537 | orchestrator | 2025-11-23 01:41:45 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:45.521729 | orchestrator | 2025-11-23 01:41:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:48.564250 | orchestrator | 2025-11-23 01:41:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:48.564348 | orchestrator | 2025-11-23 01:41:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:51.604840 | orchestrator | 2025-11-23 01:41:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:51.604972 | orchestrator | 2025-11-23 01:41:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:54.645850 | orchestrator | 2025-11-23 01:41:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:54.645921 | orchestrator | 2025-11-23 01:41:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:41:57.684740 | orchestrator | 2025-11-23 01:41:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:41:57.684851 | orchestrator | 2025-11-23 01:41:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:00.725814 | orchestrator | 2025-11-23 01:42:00 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:00.725904 | orchestrator | 2025-11-23 01:42:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:03.767108 | orchestrator | 2025-11-23 01:42:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:03.767244 | orchestrator | 2025-11-23 01:42:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:06.804453 | orchestrator | 2025-11-23 01:42:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:06.804543 | orchestrator | 2025-11-23 01:42:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:09.847840 | orchestrator | 2025-11-23 01:42:09 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:09.847948 | orchestrator | 2025-11-23 01:42:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:12.889092 | orchestrator | 2025-11-23 01:42:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:12.889197 | orchestrator | 2025-11-23 01:42:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:15.929811 | orchestrator | 2025-11-23 01:42:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:15.929924 | orchestrator | 2025-11-23 01:42:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:18.969130 | orchestrator | 2025-11-23 01:42:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:18.969251 | orchestrator | 2025-11-23 01:42:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:22.018715 | orchestrator | 2025-11-23 01:42:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:22.018835 | orchestrator | 2025-11-23 01:42:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:25.055918 | orchestrator | 2025-11-23 01:42:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:25.056048 | orchestrator | 2025-11-23 01:42:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:28.093530 | orchestrator | 2025-11-23 01:42:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:28.093637 | orchestrator | 2025-11-23 01:42:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:31.129487 | orchestrator | 2025-11-23 01:42:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:31.129568 | orchestrator | 2025-11-23 01:42:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:34.171514 | orchestrator | 2025-11-23 01:42:34 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:34.171635 | orchestrator | 2025-11-23 01:42:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:37.211752 | orchestrator | 2025-11-23 01:42:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:37.211855 | orchestrator | 2025-11-23 01:42:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:40.251239 | orchestrator | 2025-11-23 01:42:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:40.251345 | orchestrator | 2025-11-23 01:42:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:43.296672 | orchestrator | 2025-11-23 01:42:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:43.296773 | orchestrator | 2025-11-23 01:42:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:46.335334 | orchestrator | 2025-11-23 01:42:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:46.335451 | orchestrator | 2025-11-23 01:42:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:49.373900 | orchestrator | 2025-11-23 01:42:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:49.374158 | orchestrator | 2025-11-23 01:42:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:52.414545 | orchestrator | 2025-11-23 01:42:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:52.414651 | orchestrator | 2025-11-23 01:42:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:55.454744 | orchestrator | 2025-11-23 01:42:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:55.454856 | orchestrator | 2025-11-23 01:42:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:42:58.495277 | orchestrator | 2025-11-23 01:42:58 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:42:58.495355 | orchestrator | 2025-11-23 01:42:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:01.536282 | orchestrator | 2025-11-23 01:43:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:01.536384 | orchestrator | 2025-11-23 01:43:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:04.574587 | orchestrator | 2025-11-23 01:43:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:04.574690 | orchestrator | 2025-11-23 01:43:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:07.609095 | orchestrator | 2025-11-23 01:43:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:07.609302 | orchestrator | 2025-11-23 01:43:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:10.652405 | orchestrator | 2025-11-23 01:43:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:10.652511 | orchestrator | 2025-11-23 01:43:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:13.691165 | orchestrator | 2025-11-23 01:43:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:13.691247 | orchestrator | 2025-11-23 01:43:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:16.735572 | orchestrator | 2025-11-23 01:43:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:16.735681 | orchestrator | 2025-11-23 01:43:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:19.779825 | orchestrator | 2025-11-23 01:43:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:19.779894 | orchestrator | 2025-11-23 01:43:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:22.816258 | orchestrator | 2025-11-23 01:43:22 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:22.816371 | orchestrator | 2025-11-23 01:43:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:25.853500 | orchestrator | 2025-11-23 01:43:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:25.853599 | orchestrator | 2025-11-23 01:43:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:28.893190 | orchestrator | 2025-11-23 01:43:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:28.893321 | orchestrator | 2025-11-23 01:43:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:31.933881 | orchestrator | 2025-11-23 01:43:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:31.933981 | orchestrator | 2025-11-23 01:43:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:34.971565 | orchestrator | 2025-11-23 01:43:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:34.971674 | orchestrator | 2025-11-23 01:43:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:38.010851 | orchestrator | 2025-11-23 01:43:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:38.011092 | orchestrator | 2025-11-23 01:43:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:41.055375 | orchestrator | 2025-11-23 01:43:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:41.055486 | orchestrator | 2025-11-23 01:43:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:44.093724 | orchestrator | 2025-11-23 01:43:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:44.093827 | orchestrator | 2025-11-23 01:43:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:47.135637 | orchestrator | 2025-11-23 01:43:47 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:47.135735 | orchestrator | 2025-11-23 01:43:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:50.175675 | orchestrator | 2025-11-23 01:43:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:50.175774 | orchestrator | 2025-11-23 01:43:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:53.217743 | orchestrator | 2025-11-23 01:43:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:53.217876 | orchestrator | 2025-11-23 01:43:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:56.254541 | orchestrator | 2025-11-23 01:43:56 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:56.254627 | orchestrator | 2025-11-23 01:43:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:43:59.299587 | orchestrator | 2025-11-23 01:43:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:43:59.299700 | orchestrator | 2025-11-23 01:43:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:02.341305 | orchestrator | 2025-11-23 01:44:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:02.341404 | orchestrator | 2025-11-23 01:44:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:05.382355 | orchestrator | 2025-11-23 01:44:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:05.382449 | orchestrator | 2025-11-23 01:44:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:08.422622 | orchestrator | 2025-11-23 01:44:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:08.422742 | orchestrator | 2025-11-23 01:44:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:11.465333 | orchestrator | 2025-11-23 01:44:11 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:11.465439 | orchestrator | 2025-11-23 01:44:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:14.507009 | orchestrator | 2025-11-23 01:44:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:14.507205 | orchestrator | 2025-11-23 01:44:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:17.541233 | orchestrator | 2025-11-23 01:44:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:17.541337 | orchestrator | 2025-11-23 01:44:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:20.578709 | orchestrator | 2025-11-23 01:44:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:20.578808 | orchestrator | 2025-11-23 01:44:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:23.621786 | orchestrator | 2025-11-23 01:44:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:23.621917 | orchestrator | 2025-11-23 01:44:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:26.665247 | orchestrator | 2025-11-23 01:44:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:26.665378 | orchestrator | 2025-11-23 01:44:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:29.702313 | orchestrator | 2025-11-23 01:44:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:29.702419 | orchestrator | 2025-11-23 01:44:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:32.741634 | orchestrator | 2025-11-23 01:44:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:32.741786 | orchestrator | 2025-11-23 01:44:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:35.776384 | orchestrator | 2025-11-23 01:44:35 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:35.776469 | orchestrator | 2025-11-23 01:44:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:38.819957 | orchestrator | 2025-11-23 01:44:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:38.820068 | orchestrator | 2025-11-23 01:44:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:41.864854 | orchestrator | 2025-11-23 01:44:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:41.864931 | orchestrator | 2025-11-23 01:44:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:44.905173 | orchestrator | 2025-11-23 01:44:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:44.905274 | orchestrator | 2025-11-23 01:44:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:47.946634 | orchestrator | 2025-11-23 01:44:47 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:47.946738 | orchestrator | 2025-11-23 01:44:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:50.985395 | orchestrator | 2025-11-23 01:44:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:50.985484 | orchestrator | 2025-11-23 01:44:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:54.026766 | orchestrator | 2025-11-23 01:44:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:54.026898 | orchestrator | 2025-11-23 01:44:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:44:57.064900 | orchestrator | 2025-11-23 01:44:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:44:57.065024 | orchestrator | 2025-11-23 01:44:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:00.101392 | orchestrator | 2025-11-23 01:45:00 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:00.101621 | orchestrator | 2025-11-23 01:45:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:03.142591 | orchestrator | 2025-11-23 01:45:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:03.142703 | orchestrator | 2025-11-23 01:45:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:06.180567 | orchestrator | 2025-11-23 01:45:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:06.180687 | orchestrator | 2025-11-23 01:45:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:09.219181 | orchestrator | 2025-11-23 01:45:09 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:09.219283 | orchestrator | 2025-11-23 01:45:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:12.259737 | orchestrator | 2025-11-23 01:45:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:12.259846 | orchestrator | 2025-11-23 01:45:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:15.297292 | orchestrator | 2025-11-23 01:45:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:15.297423 | orchestrator | 2025-11-23 01:45:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:18.340168 | orchestrator | 2025-11-23 01:45:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:18.340274 | orchestrator | 2025-11-23 01:45:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:21.378366 | orchestrator | 2025-11-23 01:45:21 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:21.378514 | orchestrator | 2025-11-23 01:45:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:24.417652 | orchestrator | 2025-11-23 01:45:24 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:24.417728 | orchestrator | 2025-11-23 01:45:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:27.455165 | orchestrator | 2025-11-23 01:45:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:27.455260 | orchestrator | 2025-11-23 01:45:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:30.494742 | orchestrator | 2025-11-23 01:45:30 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:30.494838 | orchestrator | 2025-11-23 01:45:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:33.536374 | orchestrator | 2025-11-23 01:45:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:33.536446 | orchestrator | 2025-11-23 01:45:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:36.577726 | orchestrator | 2025-11-23 01:45:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:36.577832 | orchestrator | 2025-11-23 01:45:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:39.618740 | orchestrator | 2025-11-23 01:45:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:39.618821 | orchestrator | 2025-11-23 01:45:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:42.655765 | orchestrator | 2025-11-23 01:45:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:42.655848 | orchestrator | 2025-11-23 01:45:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:45.695195 | orchestrator | 2025-11-23 01:45:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:45.695294 | orchestrator | 2025-11-23 01:45:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:45:48.738858 | orchestrator | 2025-11-23 01:45:48 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:45:48.738946 | orchestrator | 2025-11-23 01:45:48 | INFO  | Wait 1 second(s) until the next check 
[... identical polling output elided: "Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED" / "Wait 1 second(s) until the next check" repeated every ~3 seconds from 01:45:51 through 01:54:16 ...] 
2025-11-23 01:54:19.529719 | orchestrator | 2025-11-23 01:54:19 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:19.529839 | orchestrator | 2025-11-23 01:54:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:22.569844 | orchestrator | 2025-11-23 01:54:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:22.569975 | orchestrator | 2025-11-23 01:54:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:25.605361 | orchestrator | 2025-11-23 01:54:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:25.605463 | orchestrator | 2025-11-23 01:54:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:28.644520 | orchestrator | 2025-11-23 01:54:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:28.644617 | orchestrator | 2025-11-23 01:54:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:31.681023 | orchestrator | 2025-11-23 01:54:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:31.681151 | orchestrator | 2025-11-23 01:54:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:34.722978 | orchestrator | 2025-11-23 01:54:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:34.723143 | orchestrator | 2025-11-23 01:54:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:37.766535 | orchestrator | 2025-11-23 01:54:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:37.766636 | orchestrator | 2025-11-23 01:54:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:40.808505 | orchestrator | 2025-11-23 01:54:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:40.809309 | orchestrator | 2025-11-23 01:54:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:43.853082 | orchestrator | 2025-11-23 01:54:43 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:43.853252 | orchestrator | 2025-11-23 01:54:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:46.894426 | orchestrator | 2025-11-23 01:54:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:46.894522 | orchestrator | 2025-11-23 01:54:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:49.931298 | orchestrator | 2025-11-23 01:54:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:49.931369 | orchestrator | 2025-11-23 01:54:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:52.975121 | orchestrator | 2025-11-23 01:54:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:52.975224 | orchestrator | 2025-11-23 01:54:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:56.015579 | orchestrator | 2025-11-23 01:54:56 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:56.015654 | orchestrator | 2025-11-23 01:54:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:54:59.056786 | orchestrator | 2025-11-23 01:54:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:54:59.056872 | orchestrator | 2025-11-23 01:54:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:02.092214 | orchestrator | 2025-11-23 01:55:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:02.092364 | orchestrator | 2025-11-23 01:55:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:05.129974 | orchestrator | 2025-11-23 01:55:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:05.130330 | orchestrator | 2025-11-23 01:55:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:08.167378 | orchestrator | 2025-11-23 01:55:08 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:08.167461 | orchestrator | 2025-11-23 01:55:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:11.203934 | orchestrator | 2025-11-23 01:55:11 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:11.204041 | orchestrator | 2025-11-23 01:55:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:14.243996 | orchestrator | 2025-11-23 01:55:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:14.244125 | orchestrator | 2025-11-23 01:55:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:17.282128 | orchestrator | 2025-11-23 01:55:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:17.282223 | orchestrator | 2025-11-23 01:55:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:20.325553 | orchestrator | 2025-11-23 01:55:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:20.325658 | orchestrator | 2025-11-23 01:55:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:23.364924 | orchestrator | 2025-11-23 01:55:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:23.364997 | orchestrator | 2025-11-23 01:55:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:26.404744 | orchestrator | 2025-11-23 01:55:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:26.404845 | orchestrator | 2025-11-23 01:55:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:29.444887 | orchestrator | 2025-11-23 01:55:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:29.444983 | orchestrator | 2025-11-23 01:55:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:32.483547 | orchestrator | 2025-11-23 01:55:32 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:32.483657 | orchestrator | 2025-11-23 01:55:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:35.523860 | orchestrator | 2025-11-23 01:55:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:35.523979 | orchestrator | 2025-11-23 01:55:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:38.558958 | orchestrator | 2025-11-23 01:55:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:38.559114 | orchestrator | 2025-11-23 01:55:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:41.596919 | orchestrator | 2025-11-23 01:55:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:41.597083 | orchestrator | 2025-11-23 01:55:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:44.637162 | orchestrator | 2025-11-23 01:55:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:44.637262 | orchestrator | 2025-11-23 01:55:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:47.679801 | orchestrator | 2025-11-23 01:55:47 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:47.679898 | orchestrator | 2025-11-23 01:55:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:50.721391 | orchestrator | 2025-11-23 01:55:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:50.721492 | orchestrator | 2025-11-23 01:55:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:53.760884 | orchestrator | 2025-11-23 01:55:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:53.761008 | orchestrator | 2025-11-23 01:55:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:56.800059 | orchestrator | 2025-11-23 01:55:56 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:56.800174 | orchestrator | 2025-11-23 01:55:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:55:59.845839 | orchestrator | 2025-11-23 01:55:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:55:59.845945 | orchestrator | 2025-11-23 01:55:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:02.884936 | orchestrator | 2025-11-23 01:56:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:02.885036 | orchestrator | 2025-11-23 01:56:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:05.922152 | orchestrator | 2025-11-23 01:56:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:05.922278 | orchestrator | 2025-11-23 01:56:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:08.966780 | orchestrator | 2025-11-23 01:56:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:08.966878 | orchestrator | 2025-11-23 01:56:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:12.005766 | orchestrator | 2025-11-23 01:56:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:12.005868 | orchestrator | 2025-11-23 01:56:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:15.045822 | orchestrator | 2025-11-23 01:56:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:15.045952 | orchestrator | 2025-11-23 01:56:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:18.088052 | orchestrator | 2025-11-23 01:56:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:18.088133 | orchestrator | 2025-11-23 01:56:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:21.120776 | orchestrator | 2025-11-23 01:56:21 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:21.120877 | orchestrator | 2025-11-23 01:56:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:24.157759 | orchestrator | 2025-11-23 01:56:24 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:24.157860 | orchestrator | 2025-11-23 01:56:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:27.198258 | orchestrator | 2025-11-23 01:56:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:27.198417 | orchestrator | 2025-11-23 01:56:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:30.237957 | orchestrator | 2025-11-23 01:56:30 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:30.238637 | orchestrator | 2025-11-23 01:56:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:33.278264 | orchestrator | 2025-11-23 01:56:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:33.278430 | orchestrator | 2025-11-23 01:56:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:36.317393 | orchestrator | 2025-11-23 01:56:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:36.317515 | orchestrator | 2025-11-23 01:56:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:39.354966 | orchestrator | 2025-11-23 01:56:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:39.355070 | orchestrator | 2025-11-23 01:56:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:42.395852 | orchestrator | 2025-11-23 01:56:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:42.395949 | orchestrator | 2025-11-23 01:56:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:45.443175 | orchestrator | 2025-11-23 01:56:45 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:45.443280 | orchestrator | 2025-11-23 01:56:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:48.483899 | orchestrator | 2025-11-23 01:56:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:48.484026 | orchestrator | 2025-11-23 01:56:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:51.520797 | orchestrator | 2025-11-23 01:56:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:51.520898 | orchestrator | 2025-11-23 01:56:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:54.566683 | orchestrator | 2025-11-23 01:56:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:54.566783 | orchestrator | 2025-11-23 01:56:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:56:57.607898 | orchestrator | 2025-11-23 01:56:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:56:57.608052 | orchestrator | 2025-11-23 01:56:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:00.648761 | orchestrator | 2025-11-23 01:57:00 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:00.648847 | orchestrator | 2025-11-23 01:57:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:03.682962 | orchestrator | 2025-11-23 01:57:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:03.683064 | orchestrator | 2025-11-23 01:57:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:06.722263 | orchestrator | 2025-11-23 01:57:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:06.722419 | orchestrator | 2025-11-23 01:57:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:09.761273 | orchestrator | 2025-11-23 01:57:09 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:09.761435 | orchestrator | 2025-11-23 01:57:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:12.803074 | orchestrator | 2025-11-23 01:57:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:12.803158 | orchestrator | 2025-11-23 01:57:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:15.843171 | orchestrator | 2025-11-23 01:57:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:15.843405 | orchestrator | 2025-11-23 01:57:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:18.885521 | orchestrator | 2025-11-23 01:57:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:18.885656 | orchestrator | 2025-11-23 01:57:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:21.923583 | orchestrator | 2025-11-23 01:57:21 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:21.923678 | orchestrator | 2025-11-23 01:57:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:24.961555 | orchestrator | 2025-11-23 01:57:24 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:24.961654 | orchestrator | 2025-11-23 01:57:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:28.003695 | orchestrator | 2025-11-23 01:57:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:28.003874 | orchestrator | 2025-11-23 01:57:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:31.041142 | orchestrator | 2025-11-23 01:57:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:31.041243 | orchestrator | 2025-11-23 01:57:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:34.082287 | orchestrator | 2025-11-23 01:57:34 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:34.082455 | orchestrator | 2025-11-23 01:57:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:37.122498 | orchestrator | 2025-11-23 01:57:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:37.122613 | orchestrator | 2025-11-23 01:57:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:40.164652 | orchestrator | 2025-11-23 01:57:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:40.164753 | orchestrator | 2025-11-23 01:57:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:43.211230 | orchestrator | 2025-11-23 01:57:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:43.211378 | orchestrator | 2025-11-23 01:57:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:46.251760 | orchestrator | 2025-11-23 01:57:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:46.251897 | orchestrator | 2025-11-23 01:57:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:49.290068 | orchestrator | 2025-11-23 01:57:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:49.290167 | orchestrator | 2025-11-23 01:57:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:52.329460 | orchestrator | 2025-11-23 01:57:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:52.329547 | orchestrator | 2025-11-23 01:57:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:55.370585 | orchestrator | 2025-11-23 01:57:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:55.370684 | orchestrator | 2025-11-23 01:57:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:57:58.413786 | orchestrator | 2025-11-23 01:57:58 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:57:58.413892 | orchestrator | 2025-11-23 01:57:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:01.453405 | orchestrator | 2025-11-23 01:58:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:01.453474 | orchestrator | 2025-11-23 01:58:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:04.494984 | orchestrator | 2025-11-23 01:58:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:04.495121 | orchestrator | 2025-11-23 01:58:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:07.529074 | orchestrator | 2025-11-23 01:58:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:07.529179 | orchestrator | 2025-11-23 01:58:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:10.566533 | orchestrator | 2025-11-23 01:58:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:10.566660 | orchestrator | 2025-11-23 01:58:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:13.610925 | orchestrator | 2025-11-23 01:58:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:13.611099 | orchestrator | 2025-11-23 01:58:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:16.650081 | orchestrator | 2025-11-23 01:58:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:16.650178 | orchestrator | 2025-11-23 01:58:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:19.690641 | orchestrator | 2025-11-23 01:58:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:19.690745 | orchestrator | 2025-11-23 01:58:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:22.730496 | orchestrator | 2025-11-23 01:58:22 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:22.730563 | orchestrator | 2025-11-23 01:58:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:25.771234 | orchestrator | 2025-11-23 01:58:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:25.771392 | orchestrator | 2025-11-23 01:58:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:28.816069 | orchestrator | 2025-11-23 01:58:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:28.816163 | orchestrator | 2025-11-23 01:58:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:31.855481 | orchestrator | 2025-11-23 01:58:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:31.855592 | orchestrator | 2025-11-23 01:58:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:34.900765 | orchestrator | 2025-11-23 01:58:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:34.900866 | orchestrator | 2025-11-23 01:58:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:37.938264 | orchestrator | 2025-11-23 01:58:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:37.938494 | orchestrator | 2025-11-23 01:58:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:40.974596 | orchestrator | 2025-11-23 01:58:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:40.974696 | orchestrator | 2025-11-23 01:58:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:44.018509 | orchestrator | 2025-11-23 01:58:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:44.018607 | orchestrator | 2025-11-23 01:58:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:47.054829 | orchestrator | 2025-11-23 01:58:47 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:47.054939 | orchestrator | 2025-11-23 01:58:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:50.099304 | orchestrator | 2025-11-23 01:58:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:50.099453 | orchestrator | 2025-11-23 01:58:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:53.137624 | orchestrator | 2025-11-23 01:58:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:53.137727 | orchestrator | 2025-11-23 01:58:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:56.176960 | orchestrator | 2025-11-23 01:58:56 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:56.177030 | orchestrator | 2025-11-23 01:58:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:58:59.217908 | orchestrator | 2025-11-23 01:58:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:58:59.218095 | orchestrator | 2025-11-23 01:58:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:02.254984 | orchestrator | 2025-11-23 01:59:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:02.255104 | orchestrator | 2025-11-23 01:59:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:05.295627 | orchestrator | 2025-11-23 01:59:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:05.295736 | orchestrator | 2025-11-23 01:59:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:08.335403 | orchestrator | 2025-11-23 01:59:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:08.335504 | orchestrator | 2025-11-23 01:59:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:11.375698 | orchestrator | 2025-11-23 01:59:11 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:11.375806 | orchestrator | 2025-11-23 01:59:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:14.412175 | orchestrator | 2025-11-23 01:59:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:14.412275 | orchestrator | 2025-11-23 01:59:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:17.454500 | orchestrator | 2025-11-23 01:59:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:17.454576 | orchestrator | 2025-11-23 01:59:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:20.492556 | orchestrator | 2025-11-23 01:59:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:20.492663 | orchestrator | 2025-11-23 01:59:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:23.529627 | orchestrator | 2025-11-23 01:59:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:23.529705 | orchestrator | 2025-11-23 01:59:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:26.568674 | orchestrator | 2025-11-23 01:59:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:26.568785 | orchestrator | 2025-11-23 01:59:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:29.610543 | orchestrator | 2025-11-23 01:59:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:29.610667 | orchestrator | 2025-11-23 01:59:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:32.651494 | orchestrator | 2025-11-23 01:59:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 01:59:32.651595 | orchestrator | 2025-11-23 01:59:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 01:59:35.691138 | orchestrator | 2025-11-23 01:59:35 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 01:59:35.691240 | orchestrator | 2025-11-23 01:59:35 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 seconds: task ca529767-673f-439d-b816-7bd4ff18be41 remained in state STARTED from 2025-11-23 01:59:38 through 2025-11-23 02:08:03 ...]
2025-11-23 02:08:06.409498 | orchestrator | 2025-11-23 02:08:06 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:06.409602 | orchestrator | 2025-11-23 02:08:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:09.446257 | orchestrator | 2025-11-23 02:08:09 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:09.446334 | orchestrator | 2025-11-23 02:08:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:12.473879 | orchestrator | 2025-11-23 02:08:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:12.474088 | orchestrator | 2025-11-23 02:08:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:15.513524 | orchestrator | 2025-11-23 02:08:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:15.513622 | orchestrator | 2025-11-23 02:08:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:18.554609 | orchestrator | 2025-11-23 02:08:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:18.554713 | orchestrator | 2025-11-23 02:08:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:21.589606 | orchestrator | 2025-11-23 02:08:21 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:21.589718 | orchestrator | 2025-11-23 02:08:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:24.632459 | orchestrator | 2025-11-23 02:08:24 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:24.632563 | orchestrator | 2025-11-23 02:08:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:27.679752 | orchestrator | 2025-11-23 02:08:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:27.679855 | orchestrator | 2025-11-23 02:08:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:30.720848 | orchestrator | 2025-11-23 02:08:30 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:30.720966 | orchestrator | 2025-11-23 02:08:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:33.765174 | orchestrator | 2025-11-23 02:08:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:33.765302 | orchestrator | 2025-11-23 02:08:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:36.804765 | orchestrator | 2025-11-23 02:08:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:36.804837 | orchestrator | 2025-11-23 02:08:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:39.844740 | orchestrator | 2025-11-23 02:08:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:39.844860 | orchestrator | 2025-11-23 02:08:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:42.886514 | orchestrator | 2025-11-23 02:08:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:42.886616 | orchestrator | 2025-11-23 02:08:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:45.925676 | orchestrator | 2025-11-23 02:08:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:45.925765 | orchestrator | 2025-11-23 02:08:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:48.964432 | orchestrator | 2025-11-23 02:08:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:48.964535 | orchestrator | 2025-11-23 02:08:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:52.004602 | orchestrator | 2025-11-23 02:08:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:52.004701 | orchestrator | 2025-11-23 02:08:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:55.046874 | orchestrator | 2025-11-23 02:08:55 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:55.046967 | orchestrator | 2025-11-23 02:08:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:08:58.092650 | orchestrator | 2025-11-23 02:08:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:08:58.092734 | orchestrator | 2025-11-23 02:08:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:01.134319 | orchestrator | 2025-11-23 02:09:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:01.134448 | orchestrator | 2025-11-23 02:09:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:04.176847 | orchestrator | 2025-11-23 02:09:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:04.176949 | orchestrator | 2025-11-23 02:09:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:07.212948 | orchestrator | 2025-11-23 02:09:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:07.213054 | orchestrator | 2025-11-23 02:09:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:10.249654 | orchestrator | 2025-11-23 02:09:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:10.249740 | orchestrator | 2025-11-23 02:09:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:13.295918 | orchestrator | 2025-11-23 02:09:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:13.296030 | orchestrator | 2025-11-23 02:09:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:16.335538 | orchestrator | 2025-11-23 02:09:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:16.335636 | orchestrator | 2025-11-23 02:09:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:19.374815 | orchestrator | 2025-11-23 02:09:19 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:19.374919 | orchestrator | 2025-11-23 02:09:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:22.409173 | orchestrator | 2025-11-23 02:09:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:22.409250 | orchestrator | 2025-11-23 02:09:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:25.452147 | orchestrator | 2025-11-23 02:09:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:25.452249 | orchestrator | 2025-11-23 02:09:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:28.494868 | orchestrator | 2025-11-23 02:09:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:28.494979 | orchestrator | 2025-11-23 02:09:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:31.530304 | orchestrator | 2025-11-23 02:09:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:31.530460 | orchestrator | 2025-11-23 02:09:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:34.572940 | orchestrator | 2025-11-23 02:09:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:34.573049 | orchestrator | 2025-11-23 02:09:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:37.607683 | orchestrator | 2025-11-23 02:09:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:37.607784 | orchestrator | 2025-11-23 02:09:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:40.645017 | orchestrator | 2025-11-23 02:09:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:40.645117 | orchestrator | 2025-11-23 02:09:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:43.684130 | orchestrator | 2025-11-23 02:09:43 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:43.684239 | orchestrator | 2025-11-23 02:09:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:46.719779 | orchestrator | 2025-11-23 02:09:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:46.719899 | orchestrator | 2025-11-23 02:09:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:49.755007 | orchestrator | 2025-11-23 02:09:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:49.755102 | orchestrator | 2025-11-23 02:09:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:52.798251 | orchestrator | 2025-11-23 02:09:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:52.798336 | orchestrator | 2025-11-23 02:09:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:55.841254 | orchestrator | 2025-11-23 02:09:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:55.841423 | orchestrator | 2025-11-23 02:09:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:09:58.886157 | orchestrator | 2025-11-23 02:09:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:09:58.886259 | orchestrator | 2025-11-23 02:09:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:01.924349 | orchestrator | 2025-11-23 02:10:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:01.924510 | orchestrator | 2025-11-23 02:10:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:04.965481 | orchestrator | 2025-11-23 02:10:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:04.965571 | orchestrator | 2025-11-23 02:10:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:08.003649 | orchestrator | 2025-11-23 02:10:08 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:08.003757 | orchestrator | 2025-11-23 02:10:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:11.040630 | orchestrator | 2025-11-23 02:10:11 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:11.040726 | orchestrator | 2025-11-23 02:10:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:14.078199 | orchestrator | 2025-11-23 02:10:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:14.078323 | orchestrator | 2025-11-23 02:10:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:17.124228 | orchestrator | 2025-11-23 02:10:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:17.124351 | orchestrator | 2025-11-23 02:10:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:20.166267 | orchestrator | 2025-11-23 02:10:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:20.166407 | orchestrator | 2025-11-23 02:10:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:23.205254 | orchestrator | 2025-11-23 02:10:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:23.205490 | orchestrator | 2025-11-23 02:10:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:26.237218 | orchestrator | 2025-11-23 02:10:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:26.237320 | orchestrator | 2025-11-23 02:10:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:29.279772 | orchestrator | 2025-11-23 02:10:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:29.279874 | orchestrator | 2025-11-23 02:10:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:32.323062 | orchestrator | 2025-11-23 02:10:32 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:32.323167 | orchestrator | 2025-11-23 02:10:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:35.363843 | orchestrator | 2025-11-23 02:10:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:35.363948 | orchestrator | 2025-11-23 02:10:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:38.401218 | orchestrator | 2025-11-23 02:10:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:38.401332 | orchestrator | 2025-11-23 02:10:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:41.438312 | orchestrator | 2025-11-23 02:10:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:41.438444 | orchestrator | 2025-11-23 02:10:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:44.480806 | orchestrator | 2025-11-23 02:10:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:44.480893 | orchestrator | 2025-11-23 02:10:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:47.517158 | orchestrator | 2025-11-23 02:10:47 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:47.517257 | orchestrator | 2025-11-23 02:10:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:50.555581 | orchestrator | 2025-11-23 02:10:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:50.555699 | orchestrator | 2025-11-23 02:10:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:53.597776 | orchestrator | 2025-11-23 02:10:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:53.597866 | orchestrator | 2025-11-23 02:10:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:56.636026 | orchestrator | 2025-11-23 02:10:56 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:56.636160 | orchestrator | 2025-11-23 02:10:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:10:59.671747 | orchestrator | 2025-11-23 02:10:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:10:59.671847 | orchestrator | 2025-11-23 02:10:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:02.708871 | orchestrator | 2025-11-23 02:11:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:02.708977 | orchestrator | 2025-11-23 02:11:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:05.753362 | orchestrator | 2025-11-23 02:11:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:05.753513 | orchestrator | 2025-11-23 02:11:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:08.792887 | orchestrator | 2025-11-23 02:11:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:08.792981 | orchestrator | 2025-11-23 02:11:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:11.833932 | orchestrator | 2025-11-23 02:11:11 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:11.834007 | orchestrator | 2025-11-23 02:11:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:14.871472 | orchestrator | 2025-11-23 02:11:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:14.871575 | orchestrator | 2025-11-23 02:11:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:17.917714 | orchestrator | 2025-11-23 02:11:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:17.917816 | orchestrator | 2025-11-23 02:11:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:20.964853 | orchestrator | 2025-11-23 02:11:20 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:20.964949 | orchestrator | 2025-11-23 02:11:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:24.008784 | orchestrator | 2025-11-23 02:11:24 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:24.008888 | orchestrator | 2025-11-23 02:11:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:27.048098 | orchestrator | 2025-11-23 02:11:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:27.048195 | orchestrator | 2025-11-23 02:11:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:30.085848 | orchestrator | 2025-11-23 02:11:30 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:30.085947 | orchestrator | 2025-11-23 02:11:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:33.124224 | orchestrator | 2025-11-23 02:11:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:33.124322 | orchestrator | 2025-11-23 02:11:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:36.169071 | orchestrator | 2025-11-23 02:11:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:36.169177 | orchestrator | 2025-11-23 02:11:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:39.209987 | orchestrator | 2025-11-23 02:11:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:39.210997 | orchestrator | 2025-11-23 02:11:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:42.252369 | orchestrator | 2025-11-23 02:11:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:42.252532 | orchestrator | 2025-11-23 02:11:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:45.289088 | orchestrator | 2025-11-23 02:11:45 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:45.289310 | orchestrator | 2025-11-23 02:11:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:48.328322 | orchestrator | 2025-11-23 02:11:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:48.328494 | orchestrator | 2025-11-23 02:11:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:51.365651 | orchestrator | 2025-11-23 02:11:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:51.365757 | orchestrator | 2025-11-23 02:11:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:54.406544 | orchestrator | 2025-11-23 02:11:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:54.406647 | orchestrator | 2025-11-23 02:11:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:11:57.443692 | orchestrator | 2025-11-23 02:11:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:11:57.443802 | orchestrator | 2025-11-23 02:11:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:00.481748 | orchestrator | 2025-11-23 02:12:00 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:00.481847 | orchestrator | 2025-11-23 02:12:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:03.527237 | orchestrator | 2025-11-23 02:12:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:03.527327 | orchestrator | 2025-11-23 02:12:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:06.561603 | orchestrator | 2025-11-23 02:12:06 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:06.561672 | orchestrator | 2025-11-23 02:12:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:09.603296 | orchestrator | 2025-11-23 02:12:09 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:09.603447 | orchestrator | 2025-11-23 02:12:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:12.643376 | orchestrator | 2025-11-23 02:12:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:12.643535 | orchestrator | 2025-11-23 02:12:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:15.684501 | orchestrator | 2025-11-23 02:12:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:15.684609 | orchestrator | 2025-11-23 02:12:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:18.732008 | orchestrator | 2025-11-23 02:12:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:18.732137 | orchestrator | 2025-11-23 02:12:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:21.767206 | orchestrator | 2025-11-23 02:12:21 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:21.767299 | orchestrator | 2025-11-23 02:12:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:24.804029 | orchestrator | 2025-11-23 02:12:24 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:24.804122 | orchestrator | 2025-11-23 02:12:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:27.846260 | orchestrator | 2025-11-23 02:12:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:27.846373 | orchestrator | 2025-11-23 02:12:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:30.893689 | orchestrator | 2025-11-23 02:12:30 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:30.893832 | orchestrator | 2025-11-23 02:12:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:33.932134 | orchestrator | 2025-11-23 02:12:33 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:33.932267 | orchestrator | 2025-11-23 02:12:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:36.973105 | orchestrator | 2025-11-23 02:12:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:36.973207 | orchestrator | 2025-11-23 02:12:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:40.012251 | orchestrator | 2025-11-23 02:12:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:40.012353 | orchestrator | 2025-11-23 02:12:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:43.049982 | orchestrator | 2025-11-23 02:12:43 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:43.050154 | orchestrator | 2025-11-23 02:12:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:46.088544 | orchestrator | 2025-11-23 02:12:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:46.088648 | orchestrator | 2025-11-23 02:12:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:49.130327 | orchestrator | 2025-11-23 02:12:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:49.130506 | orchestrator | 2025-11-23 02:12:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:52.163608 | orchestrator | 2025-11-23 02:12:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:52.163703 | orchestrator | 2025-11-23 02:12:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:55.198362 | orchestrator | 2025-11-23 02:12:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:55.198449 | orchestrator | 2025-11-23 02:12:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:12:58.240479 | orchestrator | 2025-11-23 02:12:58 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:12:58.240575 | orchestrator | 2025-11-23 02:12:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:13:01.279098 | orchestrator | 2025-11-23 02:13:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:13:01.279201 | orchestrator | 2025-11-23 02:13:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:13:04.321696 | orchestrator | 2025-11-23 02:13:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:13:04.321795 | orchestrator | 2025-11-23 02:13:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:13:07.361050 | orchestrator | 2025-11-23 02:13:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:13:07.361219 | orchestrator | 2025-11-23 02:13:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:13:10.400665 | orchestrator | 2025-11-23 02:13:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:13:10.400763 | orchestrator | 2025-11-23 02:13:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:13:13.440564 | orchestrator | 2025-11-23 02:13:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:13:13.441016 | orchestrator | 2025-11-23 02:13:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:13:16.468935 | orchestrator | 2025-11-23 02:13:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:13:16.469040 | orchestrator | 2025-11-23 02:13:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:13:19.515531 | orchestrator | 2025-11-23 02:13:19 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:13:19.515651 | orchestrator | 2025-11-23 02:13:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:13:22.552901 | orchestrator | 2025-11-23 02:13:22 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED
2025-11-23 02:13:22.553028 | orchestrator | 2025-11-23 02:13:22 | INFO  | Wait 1 second(s) until the next check
[... identical "Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED" / "Wait 1 second(s) until the next check" polling records repeated every ~3 seconds from 02:13:25 through 02:21:50 omitted ...]
2025-11-23 02:21:53.328120 | orchestrator | 2025-11-23 02:21:53 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:21:53.328359 | orchestrator | 2025-11-23 02:21:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:21:56.360662 | orchestrator | 2025-11-23 02:21:56 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:21:56.360798 | orchestrator | 2025-11-23 02:21:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:21:59.398783 | orchestrator | 2025-11-23 02:21:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:21:59.398910 | orchestrator | 2025-11-23 02:21:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:02.439170 | orchestrator | 2025-11-23 02:22:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:02.439330 | orchestrator | 2025-11-23 02:22:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:05.476937 | orchestrator | 2025-11-23 02:22:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:05.477036 | orchestrator | 2025-11-23 02:22:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:08.521588 | orchestrator | 2025-11-23 02:22:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:08.521700 | orchestrator | 2025-11-23 02:22:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:11.558290 | orchestrator | 2025-11-23 02:22:11 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:11.558394 | orchestrator | 2025-11-23 02:22:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:14.598704 | orchestrator | 2025-11-23 02:22:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:14.598809 | orchestrator | 2025-11-23 02:22:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:17.632575 | orchestrator | 2025-11-23 02:22:17 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:17.632675 | orchestrator | 2025-11-23 02:22:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:20.672451 | orchestrator | 2025-11-23 02:22:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:20.672559 | orchestrator | 2025-11-23 02:22:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:23.716395 | orchestrator | 2025-11-23 02:22:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:23.716720 | orchestrator | 2025-11-23 02:22:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:26.754353 | orchestrator | 2025-11-23 02:22:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:26.754462 | orchestrator | 2025-11-23 02:22:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:29.790214 | orchestrator | 2025-11-23 02:22:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:29.790314 | orchestrator | 2025-11-23 02:22:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:32.827740 | orchestrator | 2025-11-23 02:22:32 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:32.827873 | orchestrator | 2025-11-23 02:22:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:35.866719 | orchestrator | 2025-11-23 02:22:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:35.866806 | orchestrator | 2025-11-23 02:22:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:38.910461 | orchestrator | 2025-11-23 02:22:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:38.910569 | orchestrator | 2025-11-23 02:22:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:41.949919 | orchestrator | 2025-11-23 02:22:41 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:41.950106 | orchestrator | 2025-11-23 02:22:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:44.987021 | orchestrator | 2025-11-23 02:22:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:44.987125 | orchestrator | 2025-11-23 02:22:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:48.025764 | orchestrator | 2025-11-23 02:22:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:48.025864 | orchestrator | 2025-11-23 02:22:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:51.063980 | orchestrator | 2025-11-23 02:22:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:51.064081 | orchestrator | 2025-11-23 02:22:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:54.103414 | orchestrator | 2025-11-23 02:22:54 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:54.103534 | orchestrator | 2025-11-23 02:22:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:22:57.141788 | orchestrator | 2025-11-23 02:22:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:22:57.141890 | orchestrator | 2025-11-23 02:22:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:00.180348 | orchestrator | 2025-11-23 02:23:00 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:00.180436 | orchestrator | 2025-11-23 02:23:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:03.222653 | orchestrator | 2025-11-23 02:23:03 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:03.222753 | orchestrator | 2025-11-23 02:23:03 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:06.259841 | orchestrator | 2025-11-23 02:23:06 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:06.259930 | orchestrator | 2025-11-23 02:23:06 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:09.300982 | orchestrator | 2025-11-23 02:23:09 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:09.301083 | orchestrator | 2025-11-23 02:23:09 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:12.341028 | orchestrator | 2025-11-23 02:23:12 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:12.341215 | orchestrator | 2025-11-23 02:23:12 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:15.381523 | orchestrator | 2025-11-23 02:23:15 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:15.381610 | orchestrator | 2025-11-23 02:23:15 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:18.422265 | orchestrator | 2025-11-23 02:23:18 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:18.422364 | orchestrator | 2025-11-23 02:23:18 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:21.459716 | orchestrator | 2025-11-23 02:23:21 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:21.459810 | orchestrator | 2025-11-23 02:23:21 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:24.501732 | orchestrator | 2025-11-23 02:23:24 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:24.501838 | orchestrator | 2025-11-23 02:23:24 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:27.545429 | orchestrator | 2025-11-23 02:23:27 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:27.545535 | orchestrator | 2025-11-23 02:23:27 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:30.584428 | orchestrator | 2025-11-23 02:23:30 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:30.584529 | orchestrator | 2025-11-23 02:23:30 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:33.628949 | orchestrator | 2025-11-23 02:23:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:33.629058 | orchestrator | 2025-11-23 02:23:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:36.672825 | orchestrator | 2025-11-23 02:23:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:36.672919 | orchestrator | 2025-11-23 02:23:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:39.714218 | orchestrator | 2025-11-23 02:23:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:39.714318 | orchestrator | 2025-11-23 02:23:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:42.755200 | orchestrator | 2025-11-23 02:23:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:42.755306 | orchestrator | 2025-11-23 02:23:42 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:45.793326 | orchestrator | 2025-11-23 02:23:45 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:45.793443 | orchestrator | 2025-11-23 02:23:45 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:48.828766 | orchestrator | 2025-11-23 02:23:48 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:48.828918 | orchestrator | 2025-11-23 02:23:48 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:51.863109 | orchestrator | 2025-11-23 02:23:51 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:51.863219 | orchestrator | 2025-11-23 02:23:51 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:54.903585 | orchestrator | 2025-11-23 02:23:54 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:54.903688 | orchestrator | 2025-11-23 02:23:54 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:23:57.955040 | orchestrator | 2025-11-23 02:23:57 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:23:57.955225 | orchestrator | 2025-11-23 02:23:57 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:00.998107 | orchestrator | 2025-11-23 02:24:00 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:00.998202 | orchestrator | 2025-11-23 02:24:00 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:04.036161 | orchestrator | 2025-11-23 02:24:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:04.036273 | orchestrator | 2025-11-23 02:24:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:07.073956 | orchestrator | 2025-11-23 02:24:07 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:07.074155 | orchestrator | 2025-11-23 02:24:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:10.111680 | orchestrator | 2025-11-23 02:24:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:10.111787 | orchestrator | 2025-11-23 02:24:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:13.156471 | orchestrator | 2025-11-23 02:24:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:13.156619 | orchestrator | 2025-11-23 02:24:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:16.200585 | orchestrator | 2025-11-23 02:24:16 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:16.200701 | orchestrator | 2025-11-23 02:24:16 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:19.243340 | orchestrator | 2025-11-23 02:24:19 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:19.243419 | orchestrator | 2025-11-23 02:24:19 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:22.281099 | orchestrator | 2025-11-23 02:24:22 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:22.281200 | orchestrator | 2025-11-23 02:24:22 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:25.324779 | orchestrator | 2025-11-23 02:24:25 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:25.324877 | orchestrator | 2025-11-23 02:24:25 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:28.364643 | orchestrator | 2025-11-23 02:24:28 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:28.364767 | orchestrator | 2025-11-23 02:24:28 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:31.405863 | orchestrator | 2025-11-23 02:24:31 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:31.405935 | orchestrator | 2025-11-23 02:24:31 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:34.447132 | orchestrator | 2025-11-23 02:24:34 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:34.447252 | orchestrator | 2025-11-23 02:24:34 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:37.481984 | orchestrator | 2025-11-23 02:24:37 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:37.482188 | orchestrator | 2025-11-23 02:24:37 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:40.520494 | orchestrator | 2025-11-23 02:24:40 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:40.520605 | orchestrator | 2025-11-23 02:24:40 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:43.562354 | orchestrator | 2025-11-23 02:24:43 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:43.562454 | orchestrator | 2025-11-23 02:24:43 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:46.601740 | orchestrator | 2025-11-23 02:24:46 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:46.601828 | orchestrator | 2025-11-23 02:24:46 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:49.641412 | orchestrator | 2025-11-23 02:24:49 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:49.641486 | orchestrator | 2025-11-23 02:24:49 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:52.681471 | orchestrator | 2025-11-23 02:24:52 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:52.681606 | orchestrator | 2025-11-23 02:24:52 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:55.722699 | orchestrator | 2025-11-23 02:24:55 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:55.722790 | orchestrator | 2025-11-23 02:24:55 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:24:58.762598 | orchestrator | 2025-11-23 02:24:58 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:24:58.762703 | orchestrator | 2025-11-23 02:24:58 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:01.805516 | orchestrator | 2025-11-23 02:25:01 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:01.805681 | orchestrator | 2025-11-23 02:25:01 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:04.844738 | orchestrator | 2025-11-23 02:25:04 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:04.844875 | orchestrator | 2025-11-23 02:25:04 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:07.883079 | orchestrator | 2025-11-23 02:25:07 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:07.883185 | orchestrator | 2025-11-23 02:25:07 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:10.926100 | orchestrator | 2025-11-23 02:25:10 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:10.926192 | orchestrator | 2025-11-23 02:25:10 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:13.966392 | orchestrator | 2025-11-23 02:25:13 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:13.966487 | orchestrator | 2025-11-23 02:25:13 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:17.010656 | orchestrator | 2025-11-23 02:25:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:17.010779 | orchestrator | 2025-11-23 02:25:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:20.051433 | orchestrator | 2025-11-23 02:25:20 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:20.051518 | orchestrator | 2025-11-23 02:25:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:23.090808 | orchestrator | 2025-11-23 02:25:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:23.090949 | orchestrator | 2025-11-23 02:25:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:26.132469 | orchestrator | 2025-11-23 02:25:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:26.132588 | orchestrator | 2025-11-23 02:25:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:29.171783 | orchestrator | 2025-11-23 02:25:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:29.171892 | orchestrator | 2025-11-23 02:25:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:32.213830 | orchestrator | 2025-11-23 02:25:32 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:32.213953 | orchestrator | 2025-11-23 02:25:32 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:35.255395 | orchestrator | 2025-11-23 02:25:35 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:35.255500 | orchestrator | 2025-11-23 02:25:35 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:38.292902 | orchestrator | 2025-11-23 02:25:38 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:38.293067 | orchestrator | 2025-11-23 02:25:38 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:41.331579 | orchestrator | 2025-11-23 02:25:41 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:41.331692 | orchestrator | 2025-11-23 02:25:41 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:44.373479 | orchestrator | 2025-11-23 02:25:44 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:44.373644 | orchestrator | 2025-11-23 02:25:44 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:47.413160 | orchestrator | 2025-11-23 02:25:47 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:47.413270 | orchestrator | 2025-11-23 02:25:47 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:50.452481 | orchestrator | 2025-11-23 02:25:50 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:50.452581 | orchestrator | 2025-11-23 02:25:50 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:53.490284 | orchestrator | 2025-11-23 02:25:53 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:53.490387 | orchestrator | 2025-11-23 02:25:53 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:56.536252 | orchestrator | 2025-11-23 02:25:56 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:56.536362 | orchestrator | 2025-11-23 02:25:56 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:25:59.578254 | orchestrator | 2025-11-23 02:25:59 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:25:59.578378 | orchestrator | 2025-11-23 02:25:59 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:02.616165 | orchestrator | 2025-11-23 02:26:02 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:02.616285 | orchestrator | 2025-11-23 02:26:02 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:05.653739 | orchestrator | 2025-11-23 02:26:05 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:05.653856 | orchestrator | 2025-11-23 02:26:05 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:08.696603 | orchestrator | 2025-11-23 02:26:08 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:08.696702 | orchestrator | 2025-11-23 02:26:08 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:11.742834 | orchestrator | 2025-11-23 02:26:11 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:11.742923 | orchestrator | 2025-11-23 02:26:11 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:14.786091 | orchestrator | 2025-11-23 02:26:14 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:14.786210 | orchestrator | 2025-11-23 02:26:14 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:17.826457 | orchestrator | 2025-11-23 02:26:17 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:17.826575 | orchestrator | 2025-11-23 02:26:17 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:20.870195 | orchestrator | 2025-11-23 02:26:20 | INFO  | Task 
ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:20.870318 | orchestrator | 2025-11-23 02:26:20 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:23.912237 | orchestrator | 2025-11-23 02:26:23 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:23.912360 | orchestrator | 2025-11-23 02:26:23 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:26.955035 | orchestrator | 2025-11-23 02:26:26 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:26.955161 | orchestrator | 2025-11-23 02:26:26 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:29.998461 | orchestrator | 2025-11-23 02:26:29 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:29.998573 | orchestrator | 2025-11-23 02:26:29 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:33.045839 | orchestrator | 2025-11-23 02:26:33 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:33.045917 | orchestrator | 2025-11-23 02:26:33 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:36.082630 | orchestrator | 2025-11-23 02:26:36 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:36.082727 | orchestrator | 2025-11-23 02:26:36 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:39.120397 | orchestrator | 2025-11-23 02:26:39 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state STARTED 2025-11-23 02:26:39.120497 | orchestrator | 2025-11-23 02:26:39 | INFO  | Wait 1 second(s) until the next check 2025-11-23 02:26:42.155383 | orchestrator | 2025-11-23 02:26:42.155506 | orchestrator | 2025-11-23 02:26:42.155531 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-11-23 02:26:42.155550 | orchestrator | 2025-11-23 02:26:42.155567 | orchestrator | TASK [Ensure the destination directory 
exists] *********************************
2025-11-23 02:26:42.155585 | orchestrator | Sunday 23 November 2025 01:01:05 +0000 (0:00:00.095) 0:00:00.095 *******
2025-11-23 02:26:42.155603 | orchestrator | changed: [localhost]
2025-11-23 02:26:42.155621 | orchestrator |
2025-11-23 02:26:42.155639 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-11-23 02:26:42.155657 | orchestrator | Sunday 23 November 2025 01:01:05 +0000 (0:00:00.752) 0:00:00.847 *******
2025-11-23 02:26:42.155674 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2025-11-23 02:26:42.155691 | orchestrator |
2025-11-23 02:26:42.155708 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-11-23 02:26:42.156912 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-11-23 02:26:42.156951 | orchestrator |
2025-11-23 02:26:42.156990 | orchestrator |
STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157005 | orchestrator | 2025-11-23 02:26:42.157015 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157025 | orchestrator | 2025-11-23 02:26:42.157034 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157044 | orchestrator | 2025-11-23 02:26:42.157053 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157063 | orchestrator | 2025-11-23 02:26:42.157072 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157082 | orchestrator | 2025-11-23 02:26:42.157091 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157101 | orchestrator | 2025-11-23 02:26:42.157110 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157120 | orchestrator | 2025-11-23 02:26:42.157130 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157139 | orchestrator | 2025-11-23 02:26:42.157149 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157158 | orchestrator | 2025-11-23 02:26:42.157168 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157177 | orchestrator | 2025-11-23 02:26:42.157187 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157196 | orchestrator | 2025-11-23 02:26:42.157206 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157215 | orchestrator | 2025-11-23 
02:26:42.157225 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157234 | orchestrator | 2025-11-23 02:26:42.157244 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157253 | orchestrator | 2025-11-23 02:26:42.157263 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157272 | orchestrator | 2025-11-23 02:26:42.157282 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157291 | orchestrator | 2025-11-23 02:26:42.157301 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157310 | orchestrator | 2025-11-23 02:26:42.157320 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157329 | orchestrator | 2025-11-23 02:26:42.157339 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157348 | orchestrator | 2025-11-23 02:26:42.157358 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157367 | orchestrator | 2025-11-23 02:26:42.157376 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157386 | orchestrator | 2025-11-23 02:26:42.157395 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157405 | orchestrator | 2025-11-23 02:26:42.157414 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157424 | orchestrator | 2025-11-23 02:26:42.157433 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157443 
| orchestrator | 2025-11-23 02:26:42.157452 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157462 | orchestrator | 2025-11-23 02:26:42.157471 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157480 | orchestrator | 2025-11-23 02:26:42.157490 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157499 | orchestrator | 2025-11-23 02:26:42.157509 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157518 | orchestrator | 2025-11-23 02:26:42.157528 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157548 | orchestrator | 2025-11-23 02:26:42.157558 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157567 | orchestrator | 2025-11-23 02:26:42.157577 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157587 | orchestrator | 2025-11-23 02:26:42.157596 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157606 | orchestrator | 2025-11-23 02:26:42.157615 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157625 | orchestrator | 2025-11-23 02:26:42.157634 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157644 | orchestrator | 2025-11-23 02:26:42.157653 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157663 | orchestrator | 2025-11-23 02:26:42.157672 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] 
**************** 2025-11-23 02:26:42.157681 | orchestrator | 2025-11-23 02:26:42.157697 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157707 | orchestrator | 2025-11-23 02:26:42.157716 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157726 | orchestrator | 2025-11-23 02:26:42.157735 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157745 | orchestrator | 2025-11-23 02:26:42.157754 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157764 | orchestrator | 2025-11-23 02:26:42.157773 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157782 | orchestrator | 2025-11-23 02:26:42.157792 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157801 | orchestrator | 2025-11-23 02:26:42.157811 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157820 | orchestrator | 2025-11-23 02:26:42.157830 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157839 | orchestrator | 2025-11-23 02:26:42.157848 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157858 | orchestrator | 2025-11-23 02:26:42.157867 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157877 | orchestrator | 2025-11-23 02:26:42.157886 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157895 | orchestrator | 2025-11-23 02:26:42.157905 | orchestrator | STILL ALIVE [task 'Download 
ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157945 | orchestrator | 2025-11-23 02:26:42.157957 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157967 | orchestrator | 2025-11-23 02:26:42.157977 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.157986 | orchestrator | 2025-11-23 02:26:42.158003 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158014 | orchestrator | 2025-11-23 02:26:42.158085 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158095 | orchestrator | 2025-11-23 02:26:42.158105 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158114 | orchestrator | 2025-11-23 02:26:42.158124 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158134 | orchestrator | 2025-11-23 02:26:42.158143 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158153 | orchestrator | 2025-11-23 02:26:42.158163 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158172 | orchestrator | 2025-11-23 02:26:42.158182 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158192 | orchestrator | 2025-11-23 02:26:42.158201 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158219 | orchestrator | 2025-11-23 02:26:42.158229 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158239 | orchestrator | 2025-11-23 02:26:42.158248 | orchestrator | 
STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158258 | orchestrator | 2025-11-23 02:26:42.158268 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158277 | orchestrator | 2025-11-23 02:26:42.158287 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158297 | orchestrator | 2025-11-23 02:26:42.158307 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158316 | orchestrator | 2025-11-23 02:26:42.158326 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158336 | orchestrator | 2025-11-23 02:26:42.158350 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158368 | orchestrator | 2025-11-23 02:26:42.158384 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158401 | orchestrator | 2025-11-23 02:26:42.158417 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158435 | orchestrator | 2025-11-23 02:26:42.158454 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158471 | orchestrator | 2025-11-23 02:26:42.158488 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158498 | orchestrator | 2025-11-23 02:26:42.158508 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158517 | orchestrator | 2025-11-23 02:26:42.158527 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158536 | orchestrator | 2025-11-23 
02:26:42.158546 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158555 | orchestrator | 2025-11-23 02:26:42.158565 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158574 | orchestrator | 2025-11-23 02:26:42.158584 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158594 | orchestrator | 2025-11-23 02:26:42.158603 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158613 | orchestrator | 2025-11-23 02:26:42.158622 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158632 | orchestrator | 2025-11-23 02:26:42.158641 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158651 | orchestrator | 2025-11-23 02:26:42.158660 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158670 | orchestrator | 2025-11-23 02:26:42.158680 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158689 | orchestrator | 2025-11-23 02:26:42.158699 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158708 | orchestrator | 2025-11-23 02:26:42.158718 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158727 | orchestrator | 2025-11-23 02:26:42.158737 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158747 | orchestrator | 2025-11-23 02:26:42.158756 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158766 
| orchestrator | 2025-11-23 02:26:42.158776 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158785 | orchestrator | 2025-11-23 02:26:42.158795 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158804 | orchestrator | 2025-11-23 02:26:42.158814 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158823 | orchestrator | 2025-11-23 02:26:42.158839 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158857 | orchestrator | 2025-11-23 02:26:42.158866 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158876 | orchestrator | 2025-11-23 02:26:42.158886 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158895 | orchestrator | 2025-11-23 02:26:42.158904 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158939 | orchestrator | 2025-11-23 02:26:42.158951 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158960 | orchestrator | 2025-11-23 02:26:42.158970 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.158981 | orchestrator | 2025-11-23 02:26:42.158997 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159013 | orchestrator | 2025-11-23 02:26:42.159028 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159044 | orchestrator | 2025-11-23 02:26:42.159060 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] 
**************** 2025-11-23 02:26:42.159077 | orchestrator | 2025-11-23 02:26:42.159095 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159110 | orchestrator | 2025-11-23 02:26:42.159126 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159142 | orchestrator | 2025-11-23 02:26:42.159158 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159174 | orchestrator | 2025-11-23 02:26:42.159184 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159194 | orchestrator | 2025-11-23 02:26:42.159204 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159213 | orchestrator | 2025-11-23 02:26:42.159231 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159243 | orchestrator | 2025-11-23 02:26:42.159260 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159276 | orchestrator | 2025-11-23 02:26:42.159292 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159306 | orchestrator | 2025-11-23 02:26:42.159321 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159335 | orchestrator | 2025-11-23 02:26:42.159350 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159365 | orchestrator | 2025-11-23 02:26:42.159381 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159398 | orchestrator | 2025-11-23 02:26:42.159414 | orchestrator | STILL ALIVE [task 'Download 
ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159432 | orchestrator | 2025-11-23 02:26:42.159442 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159451 | orchestrator | 2025-11-23 02:26:42.159461 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159470 | orchestrator | 2025-11-23 02:26:42.159480 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159490 | orchestrator | 2025-11-23 02:26:42.159499 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159509 | orchestrator | 2025-11-23 02:26:42.159518 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159528 | orchestrator | 2025-11-23 02:26:42.159537 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159547 | orchestrator | 2025-11-23 02:26:42.159557 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159566 | orchestrator | 2025-11-23 02:26:42.159576 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159585 | orchestrator | 2025-11-23 02:26:42.159595 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159605 | orchestrator | 2025-11-23 02:26:42.159614 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159633 | orchestrator | 2025-11-23 02:26:42.159643 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-23 02:26:42.159653 | orchestrator | changed: [localhost] 2025-11-23 
02:26:42.159663 | orchestrator |
2025-11-23 02:26:42.159674 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-11-23 02:26:42.159685 | orchestrator | Sunday 23 November 2025 02:23:58 +0000 (1:22:52.147) 1:22:52.995 *******
2025-11-23 02:26:42.159696 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2025-11-23 02:26:42.159707 | orchestrator |
2025-11-23 02:26:42.159718 | orchestrator | STILL ALIVE [task 'Download ironic-agent kernel' is running] *******************
2025-11-23 02:26:42.159729 | orchestrator |
2025-11-23 02:26:42.159740 | orchestrator | STILL ALIVE [task 'Download ironic-agent kernel' is running] *******************
2025-11-23 02:26:42.159750 | orchestrator | changed: [localhost]
2025-11-23 02:26:42.159761 | orchestrator |
2025-11-23 02:26:42.159772 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-23 02:26:42.159783 | orchestrator |
2025-11-23 02:26:42.159794 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-23 02:26:42.159804 | orchestrator | Sunday 23 November 2025 02:26:38 +0000 (0:02:40.177) 1:25:33.173 *******
2025-11-23 02:26:42.159815 | orchestrator | ok: [testbed-node-0]
2025-11-23 02:26:42.159826 | orchestrator | ok: [testbed-node-1]
2025-11-23 02:26:42.159837 | orchestrator | ok: [testbed-node-2]
2025-11-23 02:26:42.159847 | orchestrator |
2025-11-23 02:26:42.159858 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-23 02:26:42.159869 | orchestrator | Sunday 23 November 2025 02:26:38 +0000 (0:00:00.292) 1:25:33.465 *******
2025-11-23 02:26:42.159880 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-11-23 02:26:42.159890 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-11-23 02:26:42.159902 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-11-23 02:26:42.159913 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-11-23 02:26:42.159951 | orchestrator |
2025-11-23 02:26:42.159963 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-11-23 02:26:42.159973 | orchestrator | skipping: no hosts matched
2025-11-23 02:26:42.159985 | orchestrator |
2025-11-23 02:26:42.159996 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 02:26:42.160014 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 02:26:42.160028 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 02:26:42.160040 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 02:26:42.160051 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 02:26:42.160062 | orchestrator |
2025-11-23 02:26:42.160073 | orchestrator |
2025-11-23 02:26:42.160084 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 02:26:42.160094 | orchestrator | Sunday 23 November 2025 02:26:39 +0000 (0:00:00.511) 1:25:33.976 *******
2025-11-23 02:26:42.160105 | orchestrator | ===============================================================================
2025-11-23 02:26:42.160116 | orchestrator | Download ironic-agent initramfs -------------------------------------- 4972.15s
2025-11-23 02:26:42.160127 | orchestrator | Download ironic-agent kernel ------------------------------------------ 160.18s
2025-11-23 02:26:42.160137 | orchestrator | Ensure the destination directory exists --------------------------------- 0.75s
2025-11-23 02:26:42.160149 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s
2025-11-23 02:26:42.160167 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-11-23 02:26:42.160188 | orchestrator | 2025-11-23 02:26:42 | INFO  | Task ca529767-673f-439d-b816-7bd4ff18be41 is in state SUCCESS
2025-11-23 02:26:42.160208 | orchestrator | 2025-11-23 02:26:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:26:45.197041 | orchestrator | 2025-11-23 02:26:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:26:48.235483 | orchestrator | 2025-11-23 02:26:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:26:51.275720 | orchestrator | 2025-11-23 02:26:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:26:54.313944 | orchestrator | 2025-11-23 02:26:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:26:57.360040 | orchestrator | 2025-11-23 02:26:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:00.400516 | orchestrator | 2025-11-23 02:27:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:03.438979 | orchestrator | 2025-11-23 02:27:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:06.471444 | orchestrator | 2025-11-23 02:27:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:09.509621 | orchestrator | 2025-11-23 02:27:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:12.544479 | orchestrator | 2025-11-23 02:27:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:15.576883 | orchestrator | 2025-11-23 02:27:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:18.617427 | orchestrator | 2025-11-23 02:27:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:21.655331 | orchestrator | 2025-11-23 02:27:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:24.691794 | orchestrator | 2025-11-23 02:27:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:27.729371 | orchestrator | 2025-11-23 02:27:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:30.767544 | orchestrator | 2025-11-23 02:27:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:33.804455 | orchestrator | 2025-11-23 02:27:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:36.846350 | orchestrator | 2025-11-23 02:27:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:39.887010 | orchestrator | 2025-11-23 02:27:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-11-23 02:27:42.923869 | orchestrator |
2025-11-23 02:27:43.128000 | orchestrator |
2025-11-23 02:27:43.130567 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Nov 23 02:27:43 UTC 2025
2025-11-23 02:27:43.130604 | orchestrator |
2025-11-23 02:27:43.498651 | orchestrator | ok: Runtime: 1:50:48.673271
2025-11-23 02:27:43.858607 |
2025-11-23 02:27:43.858779 | TASK [Bootstrap services]
2025-11-23 02:27:44.578340 | orchestrator |
2025-11-23 02:27:44.578580 | orchestrator | # BOOTSTRAP
2025-11-23 02:27:44.578620 | orchestrator |
2025-11-23 02:27:44.578642 | orchestrator | + set -e
2025-11-23 02:27:44.578660 | orchestrator | + echo
2025-11-23 02:27:44.578674 | orchestrator | + echo '# BOOTSTRAP'
2025-11-23 02:27:44.578692 | orchestrator | + echo
2025-11-23 02:27:44.578742 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-11-23 02:27:44.586779 | orchestrator | + set -e
2025-11-23 02:27:44.586844 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-11-23 02:27:48.207952 | orchestrator | 2025-11-23 02:27:48 | INFO  | It takes a moment until task 0cd3f7b7-463a-4925-b1d9-9f5a4a9f644e (flavor-manager) has been started and output is visible here.
2025-11-23 02:27:56.159858 | orchestrator | 2025-11-23 02:27:51 | INFO  | Flavor SCS-1L-1 created
2025-11-23 02:27:56.160035 | orchestrator | 2025-11-23 02:27:51 | INFO  | Flavor SCS-1L-1-5 created
2025-11-23 02:27:56.160058 | orchestrator | 2025-11-23 02:27:52 | INFO  | Flavor SCS-1V-2 created
2025-11-23 02:27:56.160070 | orchestrator | 2025-11-23 02:27:52 | INFO  | Flavor SCS-1V-2-5 created
2025-11-23 02:27:56.160081 | orchestrator | 2025-11-23 02:27:52 | INFO  | Flavor SCS-1V-4 created
2025-11-23 02:27:56.160092 | orchestrator | 2025-11-23 02:27:52 | INFO  | Flavor SCS-1V-4-10 created
2025-11-23 02:27:56.160104 | orchestrator | 2025-11-23 02:27:52 | INFO  | Flavor SCS-1V-8 created
2025-11-23 02:27:56.160116 | orchestrator | 2025-11-23 02:27:52 | INFO  | Flavor SCS-1V-8-20 created
2025-11-23 02:27:56.160141 | orchestrator | 2025-11-23 02:27:53 | INFO  | Flavor SCS-2V-4 created
2025-11-23 02:27:56.160153 | orchestrator | 2025-11-23 02:27:53 | INFO  | Flavor SCS-2V-4-10 created
2025-11-23 02:27:56.160164 | orchestrator | 2025-11-23 02:27:53 | INFO  | Flavor SCS-2V-8 created
2025-11-23 02:27:56.160175 | orchestrator | 2025-11-23 02:27:53 | INFO  | Flavor SCS-2V-8-20 created
2025-11-23 02:27:56.160186 | orchestrator | 2025-11-23 02:27:53 | INFO  | Flavor SCS-2V-16 created
2025-11-23 02:27:56.160197 | orchestrator | 2025-11-23 02:27:53 | INFO  | Flavor SCS-2V-16-50 created
2025-11-23 02:27:56.160208 | orchestrator | 2025-11-23 02:27:53 | INFO  | Flavor SCS-4V-8 created
2025-11-23 02:27:56.160220 | orchestrator | 2025-11-23 02:27:54 | INFO  | Flavor SCS-4V-8-20 created
2025-11-23 02:27:56.160231 | orchestrator | 2025-11-23 02:27:54 | INFO  | Flavor SCS-4V-16 created
2025-11-23 02:27:56.160242 | orchestrator | 2025-11-23 02:27:54 | INFO  | Flavor SCS-4V-16-50 created
2025-11-23 02:27:56.160253 | orchestrator | 2025-11-23 02:27:54 | INFO  | Flavor SCS-4V-32 created
2025-11-23 02:27:56.160264 | orchestrator | 2025-11-23 02:27:54 | INFO  | Flavor SCS-4V-32-100 created
2025-11-23 02:27:56.160275 | orchestrator | 2025-11-23 02:27:54 | INFO  | Flavor SCS-8V-16 created
2025-11-23 02:27:56.160287 | orchestrator | 2025-11-23 02:27:54 | INFO  | Flavor SCS-8V-16-50 created
2025-11-23 02:27:56.160298 | orchestrator | 2025-11-23 02:27:55 | INFO  | Flavor SCS-8V-32 created
2025-11-23 02:27:56.160310 | orchestrator | 2025-11-23 02:27:55 | INFO  | Flavor SCS-8V-32-100 created
2025-11-23 02:27:56.160320 | orchestrator | 2025-11-23 02:27:55 | INFO  | Flavor SCS-16V-32 created
2025-11-23 02:27:56.160332 | orchestrator | 2025-11-23 02:27:55 | INFO  | Flavor SCS-16V-32-100 created
2025-11-23 02:27:56.160343 | orchestrator | 2025-11-23 02:27:55 | INFO  | Flavor SCS-2V-4-20s created
2025-11-23 02:27:56.160354 | orchestrator | 2025-11-23 02:27:55 | INFO  | Flavor SCS-4V-8-50s created
2025-11-23 02:27:56.160365 | orchestrator | 2025-11-23 02:27:55 | INFO  | Flavor SCS-8V-32-100s created
2025-11-23 02:27:58.051989 | orchestrator | 2025-11-23 02:27:58 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-11-23 02:27:58.112837 | orchestrator | 2025-11-23 02:27:58 | INFO  | Task 67652302-7040-42eb-9f9b-674bf34f20cb (bootstrap-basic) was prepared for execution.
2025-11-23 02:27:58.112960 | orchestrator | 2025-11-23 02:27:58 | INFO  | It takes a moment until task 67652302-7040-42eb-9f9b-674bf34f20cb (bootstrap-basic) has been started and output is visible here.
2025-11-23 02:28:51.595059 | orchestrator |
2025-11-23 02:28:51.595175 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-11-23 02:28:51.595211 | orchestrator |
2025-11-23 02:28:51.595234 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-23 02:28:51.595246 | orchestrator | Sunday 23 November 2025 02:28:01 +0000 (0:00:00.055) 0:00:00.055 *******
2025-11-23 02:28:51.595257 | orchestrator | ok: [localhost]
2025-11-23 02:28:51.595269 | orchestrator |
2025-11-23 02:28:51.595280 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-11-23 02:28:51.595292 | orchestrator | Sunday 23 November 2025 02:28:03 +0000 (0:00:01.635) 0:00:01.690 *******
2025-11-23 02:28:51.595303 | orchestrator | ok: [localhost]
2025-11-23 02:28:51.595314 | orchestrator |
2025-11-23 02:28:51.595325 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-11-23 02:28:51.595336 | orchestrator | Sunday 23 November 2025 02:28:10 +0000 (0:00:07.060) 0:00:08.751 *******
2025-11-23 02:28:51.595347 | orchestrator | changed: [localhost]
2025-11-23 02:28:51.595358 | orchestrator |
2025-11-23 02:28:51.595369 | orchestrator | TASK [Get volume type local] ***************************************************
2025-11-23 02:28:51.595387 | orchestrator | Sunday 23 November 2025 02:28:17 +0000 (0:00:07.217) 0:00:15.969 *******
2025-11-23 02:28:51.595405 | orchestrator | ok: [localhost]
2025-11-23 02:28:51.595422 | orchestrator |
2025-11-23 02:28:51.595439 | orchestrator | TASK [Create volume type local] ************************************************
2025-11-23 02:28:51.595459 | orchestrator | Sunday 23 November 2025 02:28:23 +0000 (0:00:05.869) 0:00:21.839 *******
2025-11-23 02:28:51.595478 | orchestrator | changed: [localhost]
2025-11-23 02:28:51.595489 | orchestrator |
2025-11-23 02:28:51.595500 | orchestrator | TASK [Create public network] ***************************************************
2025-11-23 02:28:51.595511 | orchestrator | Sunday 23 November 2025 02:28:30 +0000 (0:00:06.526) 0:00:28.365 *******
2025-11-23 02:28:51.595522 | orchestrator | changed: [localhost]
2025-11-23 02:28:51.595533 | orchestrator |
2025-11-23 02:28:51.595543 | orchestrator | TASK [Set public network to default] *******************************************
2025-11-23 02:28:51.595554 | orchestrator | Sunday 23 November 2025 02:28:35 +0000 (0:00:05.156) 0:00:33.521 *******
2025-11-23 02:28:51.595565 | orchestrator | changed: [localhost]
2025-11-23 02:28:51.595576 | orchestrator |
2025-11-23 02:28:51.595587 | orchestrator | TASK [Create public subnet] ****************************************************
2025-11-23 02:28:51.595611 | orchestrator | Sunday 23 November 2025 02:28:40 +0000 (0:00:05.664) 0:00:39.185 *******
2025-11-23 02:28:51.595624 | orchestrator | changed: [localhost]
2025-11-23 02:28:51.595636 | orchestrator |
2025-11-23 02:28:51.595648 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-11-23 02:28:51.595661 | orchestrator | Sunday 23 November 2025 02:28:44 +0000 (0:00:03.965) 0:00:43.151 *******
2025-11-23 02:28:51.595686 | orchestrator | changed: [localhost]
2025-11-23 02:28:51.595708 | orchestrator |
2025-11-23 02:28:51.595721 | orchestrator | TASK [Create manager role] *****************************************************
2025-11-23 02:28:51.595733 | orchestrator | Sunday 23 November 2025 02:28:48 +0000 (0:00:03.443) 0:00:46.595 *******
2025-11-23 02:28:51.595746 | orchestrator | ok: [localhost]
2025-11-23 02:28:51.595758 | orchestrator |
2025-11-23 02:28:51.595770 | orchestrator | PLAY RECAP *********************************************************************
2025-11-23 02:28:51.595783 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-23 02:28:51.595796 | orchestrator |
2025-11-23 02:28:51.595808 | orchestrator |
2025-11-23 02:28:51.595821 | orchestrator | TASKS RECAP ********************************************************************
2025-11-23 02:28:51.595883 | orchestrator | Sunday 23 November 2025 02:28:51 +0000 (0:00:03.164) 0:00:49.759 *******
2025-11-23 02:28:51.595897 | orchestrator | ===============================================================================
2025-11-23 02:28:51.595910 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.22s
2025-11-23 02:28:51.595922 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.06s
2025-11-23 02:28:51.595934 | orchestrator | Create volume type local ------------------------------------------------ 6.53s
2025-11-23 02:28:51.595946 | orchestrator | Get volume type local --------------------------------------------------- 5.87s
2025-11-23 02:28:51.595958 | orchestrator | Set public network to default ------------------------------------------- 5.66s
2025-11-23 02:28:51.595969 | orchestrator | Create public network --------------------------------------------------- 5.16s
2025-11-23 02:28:51.595980 | orchestrator | Create public subnet ---------------------------------------------------- 3.97s
2025-11-23 02:28:51.595991 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.44s
2025-11-23 02:28:51.596001 | orchestrator | Create manager role ----------------------------------------------------- 3.16s
2025-11-23 02:28:51.596012 | orchestrator | Gathering Facts --------------------------------------------------------- 1.64s
2025-11-23 02:28:53.589581 | orchestrator | 2025-11-23 02:28:53 | INFO  | It takes a moment until task debdb0a8-bd94-4eff-aa03-7f72cb12b7a5 (image-manager) has been started and output is visible here.
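Worth noting in the play above: each resource is handled as a "Get X" task followed by a "Create X" task, the usual check-then-create idempotency pattern, which is why a rerun would report `ok` where this first run reports `changed`. A minimal sketch of that pattern, using an in-memory registry as a stand-in for the real OpenStack API:

```python
# Minimal sketch of the check-then-create pattern visible in the play above
# ("Get volume type LUKS" followed by "Create volume type LUKS").
# The dict-based registry is an illustrative stand-in, not a real cloud API.

def ensure_resource(registry: dict, name: str, spec: dict) -> str:
    """Create the resource only if it is missing; report Ansible-style state."""
    if name in registry:    # "Get ..." step: resource already present
        return "ok"
    registry[name] = spec   # "Create ..." step: first run creates it
    return "changed"

cloud = {}
first = ensure_resource(cloud, "LUKS", {"encrypted": True})   # changed
second = ensure_resource(cloud, "LUKS", {"encrypted": True})  # ok
```

Run twice against the same state, the second call is a no-op, mirroring how the play stays safe to re-execute.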
2025-11-23 02:29:35.669705 | orchestrator | 2025-11-23 02:28:56 | INFO  | Processing image 'Cirros 0.6.2'
2025-11-23 02:29:35.669884 | orchestrator | 2025-11-23 02:28:56 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-11-23 02:29:35.669909 | orchestrator | 2025-11-23 02:28:56 | INFO  | Importing image Cirros 0.6.2
2025-11-23 02:29:35.669921 | orchestrator | 2025-11-23 02:28:56 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-11-23 02:29:35.669933 | orchestrator | 2025-11-23 02:28:58 | INFO  | Waiting for image to leave queued state...
2025-11-23 02:29:35.669945 | orchestrator | 2025-11-23 02:29:00 | INFO  | Waiting for import to complete...
2025-11-23 02:29:35.669956 | orchestrator | 2025-11-23 02:29:10 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-11-23 02:29:35.669967 | orchestrator | 2025-11-23 02:29:11 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-11-23 02:29:35.669978 | orchestrator | 2025-11-23 02:29:11 | INFO  | Setting internal_version = 0.6.2
2025-11-23 02:29:35.669989 | orchestrator | 2025-11-23 02:29:11 | INFO  | Setting image_original_user = cirros
2025-11-23 02:29:35.670000 | orchestrator | 2025-11-23 02:29:11 | INFO  | Adding tag os:cirros
2025-11-23 02:29:35.670012 | orchestrator | 2025-11-23 02:29:11 | INFO  | Setting property architecture: x86_64
2025-11-23 02:29:35.670074 | orchestrator | 2025-11-23 02:29:11 | INFO  | Setting property hw_disk_bus: scsi
2025-11-23 02:29:35.670085 | orchestrator | 2025-11-23 02:29:12 | INFO  | Setting property hw_rng_model: virtio
2025-11-23 02:29:35.670096 | orchestrator | 2025-11-23 02:29:12 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-11-23 02:29:35.670107 | orchestrator | 2025-11-23 02:29:12 | INFO  | Setting property hw_watchdog_action: reset
2025-11-23 02:29:35.670118 | orchestrator | 2025-11-23 02:29:12 | INFO  | Setting property hypervisor_type: qemu
2025-11-23 02:29:35.670129 | orchestrator | 2025-11-23 02:29:12 | INFO  | Setting property os_distro: cirros
2025-11-23 02:29:35.670140 | orchestrator | 2025-11-23 02:29:13 | INFO  | Setting property os_purpose: minimal
2025-11-23 02:29:35.670150 | orchestrator | 2025-11-23 02:29:13 | INFO  | Setting property replace_frequency: never
2025-11-23 02:29:35.670186 | orchestrator | 2025-11-23 02:29:13 | INFO  | Setting property uuid_validity: none
2025-11-23 02:29:35.670197 | orchestrator | 2025-11-23 02:29:13 | INFO  | Setting property provided_until: none
2025-11-23 02:29:35.670218 | orchestrator | 2025-11-23 02:29:14 | INFO  | Setting property image_description: Cirros
2025-11-23 02:29:35.670237 | orchestrator | 2025-11-23 02:29:14 | INFO  | Setting property image_name: Cirros
2025-11-23 02:29:35.670250 | orchestrator | 2025-11-23 02:29:14 | INFO  | Setting property internal_version: 0.6.2
2025-11-23 02:29:35.670262 | orchestrator | 2025-11-23 02:29:14 | INFO  | Setting property image_original_user: cirros
2025-11-23 02:29:35.670274 | orchestrator | 2025-11-23 02:29:15 | INFO  | Setting property os_version: 0.6.2
2025-11-23 02:29:35.670287 | orchestrator | 2025-11-23 02:29:15 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-11-23 02:29:35.670300 | orchestrator | 2025-11-23 02:29:15 | INFO  | Setting property image_build_date: 2023-05-30
2025-11-23 02:29:35.670312 | orchestrator | 2025-11-23 02:29:15 | INFO  | Checking status of 'Cirros 0.6.2'
2025-11-23 02:29:35.670323 | orchestrator | 2025-11-23 02:29:15 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-11-23 02:29:35.670333 | orchestrator | 2025-11-23 02:29:15 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-11-23 02:29:35.670344 | orchestrator | 2025-11-23 02:29:16 | INFO  | Processing image 'Cirros 0.6.3'
2025-11-23 02:29:35.670359 | orchestrator | 2025-11-23 02:29:16 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-11-23 02:29:35.670376 | orchestrator | 2025-11-23 02:29:16 | INFO  | Importing image Cirros 0.6.3
2025-11-23 02:29:35.670395 | orchestrator | 2025-11-23 02:29:16 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-11-23 02:29:35.670413 | orchestrator | 2025-11-23 02:29:18 | INFO  | Waiting for image to leave queued state...
2025-11-23 02:29:35.670430 | orchestrator | 2025-11-23 02:29:20 | INFO  | Waiting for import to complete...
2025-11-23 02:29:35.670473 | orchestrator | 2025-11-23 02:29:30 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-11-23 02:29:35.670492 | orchestrator | 2025-11-23 02:29:30 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-11-23 02:29:35.670507 | orchestrator | 2025-11-23 02:29:30 | INFO  | Setting internal_version = 0.6.3
2025-11-23 02:29:35.670518 | orchestrator | 2025-11-23 02:29:30 | INFO  | Setting image_original_user = cirros
2025-11-23 02:29:35.670528 | orchestrator | 2025-11-23 02:29:30 | INFO  | Adding tag os:cirros
2025-11-23 02:29:35.670539 | orchestrator | 2025-11-23 02:29:30 | INFO  | Setting property architecture: x86_64
2025-11-23 02:29:35.670550 | orchestrator | 2025-11-23 02:29:31 | INFO  | Setting property hw_disk_bus: scsi
2025-11-23 02:29:35.670560 | orchestrator | 2025-11-23 02:29:31 | INFO  | Setting property hw_rng_model: virtio
2025-11-23 02:29:35.670571 | orchestrator | 2025-11-23 02:29:31 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-11-23 02:29:35.670582 | orchestrator | 2025-11-23 02:29:31 | INFO  | Setting property hw_watchdog_action: reset
2025-11-23 02:29:35.670592 | orchestrator | 2025-11-23 02:29:31 | INFO  | Setting property hypervisor_type: qemu
2025-11-23 02:29:35.670603 | orchestrator | 2025-11-23 02:29:32 | INFO  | Setting property os_distro: cirros
2025-11-23 02:29:35.670623 | orchestrator | 2025-11-23 02:29:32 | INFO  | Setting property os_purpose: minimal
2025-11-23 02:29:35.670634 | orchestrator | 2025-11-23 02:29:32 | INFO  | Setting property replace_frequency: never
2025-11-23 02:29:35.670645 | orchestrator | 2025-11-23 02:29:32 | INFO  | Setting property uuid_validity: none
2025-11-23 02:29:35.670655 | orchestrator | 2025-11-23 02:29:33 | INFO  | Setting property provided_until: none
2025-11-23 02:29:35.670666 | orchestrator | 2025-11-23 02:29:33 | INFO  | Setting property image_description: Cirros
2025-11-23 02:29:35.670676 | orchestrator | 2025-11-23 02:29:33 | INFO  | Setting property image_name: Cirros
2025-11-23 02:29:35.670687 | orchestrator | 2025-11-23 02:29:33 | INFO  | Setting property internal_version: 0.6.3
2025-11-23 02:29:35.670697 | orchestrator | 2025-11-23 02:29:34 | INFO  | Setting property image_original_user: cirros
2025-11-23 02:29:35.670708 | orchestrator | 2025-11-23 02:29:34 | INFO  | Setting property os_version: 0.6.3
2025-11-23 02:29:35.670719 | orchestrator | 2025-11-23 02:29:34 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-11-23 02:29:35.670730 | orchestrator | 2025-11-23 02:29:34 | INFO  | Setting property image_build_date: 2024-09-26
2025-11-23 02:29:35.670747 | orchestrator | 2025-11-23 02:29:34 | INFO  | Checking status of 'Cirros 0.6.3'
2025-11-23 02:29:35.670757 | orchestrator | 2025-11-23 02:29:34 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-11-23 02:29:35.670768 | orchestrator | 2025-11-23 02:29:34 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-11-23 02:29:35.871197 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-11-23 02:29:37.958389 | orchestrator | 2025-11-23 02:29:37 | INFO  | date: 2025-11-22
2025-11-23 02:29:37.958487 | orchestrator | 2025-11-23 02:29:37 | INFO  | image: octavia-amphora-haproxy-2024.2.20251122.qcow2
2025-11-23 02:29:37.958505 | orchestrator | 2025-11-23 02:29:37 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251122.qcow2
2025-11-23 02:29:37.958642 | orchestrator | 2025-11-23 02:29:37 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251122.qcow2.CHECKSUM
2025-11-23 02:29:38.086482 | orchestrator | 2025-11-23 02:29:38 | INFO  | checksum: cd769587c347f21ed8603bd05fea0ad04934b3ec64bf9530a70bcda0e8478e33
2025-11-23 02:29:38.142985 | orchestrator | 2025-11-23 02:29:38 | INFO  | It takes a moment until task e73df5fb-21e8-4306-b328-ed4f2b2a8912 (image-manager) has been started and output is visible here.
2025-11-23 02:30:53.214963 | orchestrator | 2025-11-23 02:29:40 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-11-22'
2025-11-23 02:30:53.215108 | orchestrator | 2025-11-23 02:29:40 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251122.qcow2: 200
2025-11-23 02:30:53.215124 | orchestrator | 2025-11-23 02:29:40 | INFO  | Importing image OpenStack Octavia Amphora 2025-11-22
2025-11-23 02:30:53.215136 | orchestrator | 2025-11-23 02:29:40 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251122.qcow2
2025-11-23 02:30:53.215147 | orchestrator | 2025-11-23 02:29:42 | INFO  | Waiting for image to leave queued state...
2025-11-23 02:30:53.215156 | orchestrator | 2025-11-23 02:29:44 | INFO  | Waiting for import to complete...
2025-11-23 02:30:53.215166 | orchestrator | 2025-11-23 02:29:54 | INFO  | Waiting for import to complete...
2025-11-23 02:30:53.215204 | orchestrator | 2025-11-23 02:30:04 | INFO  | Waiting for import to complete...
2025-11-23 02:30:53.215214 | orchestrator | 2025-11-23 02:30:14 | INFO  | Waiting for import to complete...
2025-11-23 02:30:53.215223 | orchestrator | 2025-11-23 02:30:24 | INFO  | Waiting for import to complete...
2025-11-23 02:30:53.215231 | orchestrator | 2025-11-23 02:30:34 | INFO  | Waiting for import to complete...
2025-11-23 02:30:53.215240 | orchestrator | 2025-11-23 02:30:44 | INFO  | Waiting for image to leave queued state...
2025-11-23 02:30:53.215249 | orchestrator | 2025-11-23 02:30:46 | INFO  | Waiting for image to leave queued state...
2025-11-23 02:30:53.215258 | orchestrator | 2025-11-23 02:30:48 | INFO  | Waiting for image to leave queued state...
2025-11-23 02:30:53.215266 | orchestrator | 2025-11-23 02:30:50 | INFO  | Waiting for image to leave queued state...
2025-11-23 02:30:53.215275 | orchestrator | 2025-11-23 02:30:52 | ERROR  | Image OpenStack Octavia Amphora 2025-11-22 seems stuck in queued state
2025-11-23 02:30:53.215286 | orchestrator | 2025-11-23 02:30:53 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-11-23 02:30:53.215295 | orchestrator | 2025-11-23 02:30:53 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-11-23 02:30:53.215304 | orchestrator | 2025-11-23 02:30:53 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-11-23 02:30:53.215313 | orchestrator | 2025-11-23 02:30:53 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-11-23 02:30:53.215322 | orchestrator |
2025-11-23 02:30:53.215331 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
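The failure mode above is the classic poll-with-deadline pattern: the image manager repeatedly checks the Glance image status, logging "Waiting for import to complete...", and gives up with "seems stuck in queued state" once its retry budget is exhausted. A minimal sketch of that loop, with an injected status callback and an illustrative attempt limit (both assumptions, not the tool's real API):

```python
# Sketch of the wait loop suggested by the log above: poll a status callback
# until it reports "active", and give up after a fixed number of attempts,
# as the image manager does when an image never leaves "queued".
# `get_status` and `max_attempts` are illustrative, not the tool's real API.

def wait_for_active(get_status, max_attempts: int = 5) -> str:
    for _ in range(max_attempts):
        if get_status() == "active":
            return "active"
        # the real tool logs "Waiting for import to complete..." here
        # and sleeps between attempts
    return "stuck"  # corresponds to the ERROR line in the log

# Simulated status sequences standing in for Glance responses:
healthy = iter(["queued", "importing", "active"]).__next__
stuck = iter(["queued"] * 10).__next__
```

With the `healthy` sequence the loop returns `"active"` on the third poll; with `stuck` it exhausts its attempts and reports failure, which is exactly what turned this run's `rc` into 1.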
2025-11-23 02:30:53.531618 | orchestrator | ERROR
2025-11-23 02:30:53.531860 | orchestrator | {
2025-11-23 02:30:53.531904 | orchestrator | "delta": "0:03:09.225640",
2025-11-23 02:30:53.531928 | orchestrator | "end": "2025-11-23 02:30:53.429316",
2025-11-23 02:30:53.531948 | orchestrator | "msg": "non-zero return code",
2025-11-23 02:30:53.531967 | orchestrator | "rc": 1,
2025-11-23 02:30:53.531986 | orchestrator | "start": "2025-11-23 02:27:44.203676"
2025-11-23 02:30:53.532004 | orchestrator | } failure
2025-11-23 02:30:53.544660 |
2025-11-23 02:30:53.544768 | PLAY RECAP
2025-11-23 02:30:53.544823 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-11-23 02:30:53.544848 |
2025-11-23 02:30:53.815577 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-11-23 02:30:53.817517 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-11-23 02:30:54.620909 |
2025-11-23 02:30:54.621180 | PLAY [Post output play]
2025-11-23 02:30:54.639102 |
2025-11-23 02:30:54.639245 | LOOP [stage-output : Register sources]
2025-11-23 02:30:54.711551 |
2025-11-23 02:30:54.711979 | TASK [stage-output : Check sudo]
2025-11-23 02:30:55.780180 | orchestrator | sudo: a password is required
2025-11-23 02:30:56.253469 | orchestrator | ok: Runtime: 0:00:00.239327
2025-11-23 02:30:56.260700 |
2025-11-23 02:30:56.260862 | LOOP [stage-output : Set source and destination for files and folders]
2025-11-23 02:30:56.292970 |
2025-11-23 02:30:56.293213 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-11-23 02:30:56.357376 | orchestrator | ok
2025-11-23 02:30:56.364159 |
2025-11-23 02:30:56.364286 | LOOP [stage-output : Ensure target folders exist]
2025-11-23 02:30:56.823150 | orchestrator | ok: "docs"
2025-11-23 02:30:56.823590 |
2025-11-23 02:30:57.080515 | orchestrator | ok: "artifacts"
2025-11-23 02:30:57.328145 | orchestrator | ok: "logs"
2025-11-23 02:30:57.354976 |
2025-11-23 02:30:57.355189 | LOOP [stage-output : Copy files and folders to staging folder]
2025-11-23 02:30:57.396326 |
2025-11-23 02:30:57.396635 | TASK [stage-output : Make all log files readable]
2025-11-23 02:30:57.700317 | orchestrator | ok
2025-11-23 02:30:57.710154 |
2025-11-23 02:30:57.710301 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-11-23 02:30:57.745585 | orchestrator | skipping: Conditional result was False
2025-11-23 02:30:57.758607 |
2025-11-23 02:30:57.758815 | TASK [stage-output : Discover log files for compression]
2025-11-23 02:30:57.775148 | orchestrator | skipping: Conditional result was False
2025-11-23 02:30:57.784801 |
2025-11-23 02:30:57.784949 | LOOP [stage-output : Archive everything from logs]
2025-11-23 02:30:57.828736 |
2025-11-23 02:30:57.828934 | PLAY [Post cleanup play]
2025-11-23 02:30:57.838413 |
2025-11-23 02:30:57.838536 | TASK [Set cloud fact (Zuul deployment)]
2025-11-23 02:30:57.912083 | orchestrator | ok
2025-11-23 02:30:57.923738 |
2025-11-23 02:30:57.923885 | TASK [Set cloud fact (local deployment)]
2025-11-23 02:30:57.969444 | orchestrator | skipping: Conditional result was False
2025-11-23 02:30:57.982970 |
2025-11-23 02:30:57.983154 | TASK [Clean the cloud environment]
2025-11-23 02:30:59.004179 | orchestrator | 2025-11-23 02:30:59 - clean up servers
2025-11-23 02:31:00.397890 | orchestrator | 2025-11-23 02:31:00 - testbed-manager
2025-11-23 02:31:00.485735 | orchestrator | 2025-11-23 02:31:00 - testbed-node-0
2025-11-23 02:31:00.581139 | orchestrator | 2025-11-23 02:31:00 - testbed-node-5
2025-11-23 02:31:00.667936 | orchestrator | 2025-11-23 02:31:00 - testbed-node-3
2025-11-23 02:31:00.753498 | orchestrator | 2025-11-23 02:31:00 - testbed-node-2
2025-11-23 02:31:00.843214 | orchestrator | 2025-11-23 02:31:00 - testbed-node-1
2025-11-23 02:31:00.936690 | orchestrator | 2025-11-23 02:31:00 - testbed-node-4
2025-11-23 02:31:01.030921 | orchestrator | 2025-11-23 02:31:01 - clean up keypairs
2025-11-23 02:31:01.049068 | orchestrator | 2025-11-23 02:31:01 - testbed
2025-11-23 02:31:01.075474 | orchestrator | 2025-11-23 02:31:01 - wait for servers to be gone
2025-11-23 02:31:11.929380 | orchestrator | 2025-11-23 02:31:11 - clean up ports
2025-11-23 02:31:12.142319 | orchestrator | 2025-11-23 02:31:12 - 0661a76a-f4a0-4b39-b71a-baaa5b5ae921
2025-11-23 02:31:12.450565 | orchestrator | 2025-11-23 02:31:12 - 07cc13b4-a952-4b94-80bd-932a83d1b64b
2025-11-23 02:31:12.683187 | orchestrator | 2025-11-23 02:31:12 - 2495bad2-fdf9-42ba-ad0a-669365662eff
2025-11-23 02:31:13.072888 | orchestrator | 2025-11-23 02:31:13 - 5899e256-7ea2-4b16-8cdc-adbf417d203d
2025-11-23 02:31:13.286150 | orchestrator | 2025-11-23 02:31:13 - a4094702-4847-4ac0-9614-6b88a2775252
2025-11-23 02:31:13.497403 | orchestrator | 2025-11-23 02:31:13 - eed76004-bdbc-4a02-9c9f-2671628ad413
2025-11-23 02:31:13.717551 | orchestrator | 2025-11-23 02:31:13 - f6dcbdba-13af-46cb-9555-819a9fc258d1
2025-11-23 02:31:13.942362 | orchestrator | 2025-11-23 02:31:13 - clean up volumes
2025-11-23 02:31:14.052142 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-manager-base
2025-11-23 02:31:14.090151 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-3-node-base
2025-11-23 02:31:14.132710 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-5-node-base
2025-11-23 02:31:14.176702 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-0-node-base
2025-11-23 02:31:14.216217 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-1-node-base
2025-11-23 02:31:14.256911 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-2-node-base
2025-11-23 02:31:14.297853 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-4-node-base
2025-11-23 02:31:14.343377 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-3-node-3
2025-11-23 02:31:14.389354 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-1-node-4
2025-11-23 02:31:14.449092 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-2-node-5
2025-11-23 02:31:14.496441 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-4-node-4
2025-11-23 02:31:14.540174 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-7-node-4
2025-11-23 02:31:14.591645 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-0-node-3
2025-11-23 02:31:14.634183 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-8-node-5
2025-11-23 02:31:14.680291 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-6-node-3
2025-11-23 02:31:14.722913 | orchestrator | 2025-11-23 02:31:14 - testbed-volume-5-node-5
2025-11-23 02:31:14.769427 | orchestrator | 2025-11-23 02:31:14 - disconnect routers
2025-11-23 02:31:14.892719 | orchestrator | 2025-11-23 02:31:14 - testbed
2025-11-23 02:31:15.938374 | orchestrator | 2025-11-23 02:31:15 - clean up subnets
2025-11-23 02:31:16.015481 | orchestrator | 2025-11-23 02:31:16 - subnet-testbed-management
2025-11-23 02:31:16.218483 | orchestrator | 2025-11-23 02:31:16 - clean up networks
2025-11-23 02:31:16.421704 | orchestrator | 2025-11-23 02:31:16 - net-testbed-management
2025-11-23 02:31:17.198965 | orchestrator | 2025-11-23 02:31:17 - clean up security groups
2025-11-23 02:31:17.241813 | orchestrator | 2025-11-23 02:31:17 - testbed-management
2025-11-23 02:31:17.363615 | orchestrator | 2025-11-23 02:31:17 - testbed-node
2025-11-23 02:31:17.479626 | orchestrator | 2025-11-23 02:31:17 - clean up floating ips
2025-11-23 02:31:17.517197 | orchestrator | 2025-11-23 02:31:17 - 81.163.193.118
2025-11-23 02:31:17.908378 | orchestrator | 2025-11-23 02:31:17 - clean up routers
2025-11-23 02:31:18.010530 | orchestrator | 2025-11-23 02:31:18 - testbed
2025-11-23 02:31:19.070296 | orchestrator | ok: Runtime: 0:00:20.729566
2025-11-23 02:31:19.073953 |
2025-11-23 02:31:19.074070 | PLAY RECAP
2025-11-23 02:31:19.074273 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-11-23 02:31:19.074412 |
2025-11-23 02:31:19.235923 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-11-23 02:31:19.238417 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-11-23 02:31:19.980206 |
2025-11-23 02:31:19.980368 | PLAY [Cleanup play]
2025-11-23 02:31:20.000172 |
2025-11-23 02:31:20.000315 | TASK [Set cloud fact (Zuul deployment)]
2025-11-23 02:31:20.052687 | orchestrator | ok
2025-11-23 02:31:20.059499 |
2025-11-23 02:31:20.059634 | TASK [Set cloud fact (local deployment)]
2025-11-23 02:31:20.094154 | orchestrator | skipping: Conditional result was False
2025-11-23 02:31:20.102467 |
2025-11-23 02:31:20.102593 | TASK [Clean the cloud environment]
2025-11-23 02:31:21.380121 | orchestrator | 2025-11-23 02:31:21 - clean up servers
2025-11-23 02:31:21.901630 | orchestrator | 2025-11-23 02:31:21 - clean up keypairs
2025-11-23 02:31:21.921423 | orchestrator | 2025-11-23 02:31:21 - wait for servers to be gone
2025-11-23 02:31:21.963760 | orchestrator | 2025-11-23 02:31:21 - clean up ports
2025-11-23 02:31:22.056063 | orchestrator | 2025-11-23 02:31:22 - clean up volumes
2025-11-23 02:31:22.129219 | orchestrator | 2025-11-23 02:31:22 - disconnect routers
2025-11-23 02:31:22.164455 | orchestrator | 2025-11-23 02:31:22 - clean up subnets
2025-11-23 02:31:22.193078 | orchestrator | 2025-11-23 02:31:22 - clean up networks
2025-11-23 02:31:22.830590 | orchestrator | 2025-11-23 02:31:22 - clean up security groups
2025-11-23 02:31:22.875685 | orchestrator | 2025-11-23 02:31:22 - clean up floating ips
2025-11-23 02:31:22.902528 | orchestrator | 2025-11-23 02:31:22 - clean up routers
2025-11-23 02:31:23.140120 | orchestrator | ok: Runtime: 0:00:02.021956
2025-11-23 02:31:23.143964 |
2025-11-23 02:31:23.144148 | PLAY RECAP
2025-11-23 02:31:23.144268 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-11-23 02:31:23.144330 |
2025-11-23 02:31:23.275394 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-11-23 02:31:23.277612 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-11-23 02:31:24.140654 |
2025-11-23 02:31:24.140865 | PLAY [Base post-fetch]
2025-11-23 02:31:24.157339 |
2025-11-23 02:31:24.157533 | TASK [fetch-output : Set log path for multiple nodes]
2025-11-23 02:31:24.234848 | orchestrator | skipping: Conditional result was False
2025-11-23 02:31:24.243333 |
2025-11-23 02:31:24.243491 | TASK [fetch-output : Set log path for single node]
2025-11-23 02:31:24.306923 | orchestrator | ok
2025-11-23 02:31:24.315845 |
2025-11-23 02:31:24.316000 | LOOP [fetch-output : Ensure local output dirs]
2025-11-23 02:31:24.895491 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/14aeca2d96864489b1e086b610ab7ca4/work/logs"
2025-11-23 02:31:25.179078 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/14aeca2d96864489b1e086b610ab7ca4/work/artifacts"
2025-11-23 02:31:25.503364 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/14aeca2d96864489b1e086b610ab7ca4/work/docs"
2025-11-23 02:31:25.519902 |
2025-11-23 02:31:25.520050 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-11-23 02:31:26.499953 | orchestrator | changed: .d..t...... ./
2025-11-23 02:31:26.500325 | orchestrator | changed: All items complete
2025-11-23 02:31:26.500390 |
2025-11-23 02:31:27.239288 | orchestrator | changed: .d..t...... ./
2025-11-23 02:31:27.993118 | orchestrator | changed: .d..t...... ./
2025-11-23 02:31:28.024683 |
2025-11-23 02:31:28.025466 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-11-23 02:31:28.058739 | orchestrator | skipping: Conditional result was False
2025-11-23 02:31:28.061079 | orchestrator | skipping: Conditional result was False
2025-11-23 02:31:28.074071 |
2025-11-23 02:31:28.074160 | PLAY RECAP
2025-11-23 02:31:28.074212 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-11-23 02:31:28.074239 |
2025-11-23 02:31:28.219026 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-11-23 02:31:28.222823 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-11-23 02:31:28.975965 |
2025-11-23 02:31:28.976130 | PLAY [Base post]
2025-11-23 02:31:28.991195 |
2025-11-23 02:31:28.991340 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-11-23 02:31:30.242966 | orchestrator | changed
2025-11-23 02:31:30.252929 |
2025-11-23 02:31:30.253063 | PLAY RECAP
2025-11-23 02:31:30.253145 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-11-23 02:31:30.253224 |
2025-11-23 02:31:30.386268 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-11-23 02:31:30.390461 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-11-23 02:31:31.244778 |
2025-11-23 02:31:31.244958 | PLAY [Base post-logs]
2025-11-23 02:31:31.258785 |
2025-11-23 02:31:31.258959 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-11-23 02:31:31.764608 | localhost | changed
2025-11-23 02:31:31.779389 |
2025-11-23 02:31:31.779598 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-11-23 02:31:31.818869 | localhost | ok
2025-11-23 02:31:31.826782 |
2025-11-23 02:31:31.826989 | TASK [Set zuul-log-path fact]
2025-11-23 02:31:31.856491 | localhost | ok
2025-11-23 02:31:31.871313 |
2025-11-23 02:31:31.871475 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-23 02:31:31.896658 | localhost | ok
2025-11-23 02:31:31.916350 |
2025-11-23 02:31:31.916474 | TASK [upload-logs : Create log directories]
2025-11-23 02:31:32.427289 | localhost | changed
2025-11-23 02:31:32.433852 |
2025-11-23 02:31:32.434051 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-11-23 02:31:32.969981 | localhost -> localhost | ok: Runtime: 0:00:00.007101
2025-11-23 02:31:32.977361 |
2025-11-23 02:31:32.977533 | TASK [upload-logs : Upload logs to log server]
2025-11-23 02:31:33.612398 | localhost | Output suppressed because no_log was given
2025-11-23 02:31:33.617118 |
2025-11-23 02:31:33.617369 | LOOP [upload-logs : Compress console log and json output]
2025-11-23 02:31:33.677118 | localhost | skipping: Conditional result was False
2025-11-23 02:31:33.682199 | localhost | skipping: Conditional result was False
2025-11-23 02:31:33.690421 |
2025-11-23 02:31:33.690763 | LOOP [upload-logs : Upload compressed console log and json output]
2025-11-23 02:31:33.744404 | localhost | skipping: Conditional result was False
2025-11-23 02:31:33.745247 |
2025-11-23 02:31:33.748698 | localhost | skipping: Conditional result was False
2025-11-23 02:31:33.757655 |
2025-11-23 02:31:33.757944 | LOOP [upload-logs : Upload console log and json output]
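A closing observation on the two "Clean the cloud environment" runs above: both walk the same dependency-aware order, deleting servers before their ports and volumes, and disconnecting routers before subnets, networks, and finally the routers themselves. One way such a tool might encode this (an illustrative sketch, not the testbed's actual implementation; the phase strings mirror the log lines, the handler dict is invented):

```python
# The cleanup output always proceeds in the same dependency-aware order;
# a reimplementation could encode it as an explicit phase list.
# Phase names mirror the log lines; the handler mapping is illustrative.

CLEANUP_PHASES = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]

def run_cleanup(handlers: dict) -> list:
    """Execute known handlers in the fixed order; skip phases with no handler."""
    done = []
    for phase in CLEANUP_PHASES:
        if phase in handlers:
            handlers[phase]()
            done.append(phase)
    return done
```

Skipping absent phases is what makes the second cleanup run above finish in about two seconds: every phase is attempted, but there is nothing left to delete.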